Test Report: KVM_Linux_crio 19443

8b84af123e21bffd183d137e5ca9151109c81e73:2024-08-15:35789

Failed tests (30/312)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 155.42
36 TestAddons/parallel/MetricsServer 290.74
45 TestAddons/StoppedEnableDisable 154.4
164 TestMultiControlPlane/serial/StopSecondaryNode 141.57
166 TestMultiControlPlane/serial/RestartSecondaryNode 50.78
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 409.01
171 TestMultiControlPlane/serial/StopCluster 141.49
231 TestMultiNode/serial/RestartKeepsNodes 324.51
233 TestMultiNode/serial/StopMultiNode 141.31
240 TestPreload 278.81
248 TestKubernetesUpgrade 410.35
284 TestStartStop/group/old-k8s-version/serial/FirstStart 265.41
285 TestPause/serial/SecondStartNoReconfiguration 56.95
292 TestStartStop/group/no-preload/serial/Stop 139.19
297 TestStartStop/group/embed-certs/serial/Stop 138.92
298 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.52
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.93
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/old-k8s-version/serial/SecondStart 750.74
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.18
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.1
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.3
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 530.46
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 428.71
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 313.4
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 134.5
TestAddons/parallel/Ingress (155.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-799058 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-799058 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-799058 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2dd945a2-dba6-4274-a0e9-67190b86b7cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2dd945a2-dba6-4274-a0e9-67190b86b7cd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003870387s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-799058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.505164464s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-799058 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.195
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable ingress-dns --alsologtostderr -v=1: (1.467715588s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable ingress --alsologtostderr -v=1: (7.661396533s)
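The probe above failed with "ssh: Process exited with status 28"; 28 is curl's exit code for a timed-out operation, so the request through the ingress controller never completed before the test gave up. A minimal sketch for reproducing the same check by hand against this profile, assuming the repo's testdata manifests and the built binary at out/minikube-linux-amd64 (both taken from the log above); the explicit --max-time bound is an added assumption, not part of the original command:

    # Recreate the nginx ingress and backend pod/service used by the test
    kubectl --context addons-799058 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-799058 replace --force -f testdata/nginx-pod-svc.yaml
    kubectl --context addons-799058 wait --for=condition=ready pod -l run=nginx --timeout=8m

    # Curl from inside the VM with the Host header the ingress routes on;
    # a hang here that ends in curl exit code 28 reproduces the failure above
    out/minikube-linux-amd64 -p addons-799058 ssh "curl -s --max-time 120 -H 'Host: nginx.example.com' http://127.0.0.1/"
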
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-799058 -n addons-799058
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 logs -n 25: (1.104955798s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-024815                                                                     | download-only-024815 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:06 UTC |
	| delete  | -p download-only-303162                                                                     | download-only-303162 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-836990 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | binary-mirror-836990                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37773                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-836990                                                                     | binary-mirror-836990 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-799058 --wait=true                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:09 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| ip      | addons-799058 ip                                                                            | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | -p addons-799058                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | -p addons-799058                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-799058 ssh cat                                                                       | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | /opt/local-path-provisioner/pvc-91dd3a08-78ae-4a50-9888-964894be42ae_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:10 UTC | 15 Aug 24 00:10 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-799058 ssh curl -s                                                                   | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-799058 ip                                                                            | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:06:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:06:10.190820   21011 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:06:10.190906   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:06:10.190914   21011 out.go:304] Setting ErrFile to fd 2...
	I0815 00:06:10.190918   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:06:10.191060   21011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:06:10.191619   21011 out.go:298] Setting JSON to false
	I0815 00:06:10.192431   21011 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2915,"bootTime":1723677455,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:06:10.192479   21011 start.go:139] virtualization: kvm guest
	I0815 00:06:10.194676   21011 out.go:177] * [addons-799058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:06:10.196085   21011 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:06:10.196084   21011 notify.go:220] Checking for updates...
	I0815 00:06:10.198508   21011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:06:10.199610   21011 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:06:10.200799   21011 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:10.201808   21011 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:06:10.202890   21011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:06:10.204146   21011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:06:10.234542   21011 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 00:06:10.235591   21011 start.go:297] selected driver: kvm2
	I0815 00:06:10.235614   21011 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:06:10.235625   21011 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:06:10.236242   21011 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:06:10.236300   21011 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:06:10.249863   21011 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:06:10.249899   21011 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:06:10.250117   21011 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:06:10.250186   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:10.250201   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:10.250210   21011 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 00:06:10.250268   21011 start.go:340] cluster config:
	{Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:06:10.250378   21011 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:06:10.252143   21011 out.go:177] * Starting "addons-799058" primary control-plane node in "addons-799058" cluster
	I0815 00:06:10.253332   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:06:10.253357   21011 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:06:10.253363   21011 cache.go:56] Caching tarball of preloaded images
	I0815 00:06:10.253461   21011 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:06:10.253476   21011 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:06:10.253784   21011 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json ...
	I0815 00:06:10.253805   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json: {Name:mk8ebdac0451abf719046a00b1896a9a27305305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:10.253952   21011 start.go:360] acquireMachinesLock for addons-799058: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:06:10.254009   21011 start.go:364] duration metric: took 40.749µs to acquireMachinesLock for "addons-799058"
	I0815 00:06:10.254029   21011 start.go:93] Provisioning new machine with config: &{Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:10.254104   21011 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 00:06:10.255574   21011 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 00:06:10.255700   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:10.255747   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:10.269223   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0815 00:06:10.269642   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:10.270102   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:10.270123   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:10.270485   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:10.270669   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:10.270799   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:10.270922   21011 start.go:159] libmachine.API.Create for "addons-799058" (driver="kvm2")
	I0815 00:06:10.270949   21011 client.go:168] LocalClient.Create starting
	I0815 00:06:10.270985   21011 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:06:10.507109   21011 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:06:10.634737   21011 main.go:141] libmachine: Running pre-create checks...
	I0815 00:06:10.634761   21011 main.go:141] libmachine: (addons-799058) Calling .PreCreateCheck
	I0815 00:06:10.635209   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:10.635608   21011 main.go:141] libmachine: Creating machine...
	I0815 00:06:10.635620   21011 main.go:141] libmachine: (addons-799058) Calling .Create
	I0815 00:06:10.635727   21011 main.go:141] libmachine: (addons-799058) Creating KVM machine...
	I0815 00:06:10.636869   21011 main.go:141] libmachine: (addons-799058) DBG | found existing default KVM network
	I0815 00:06:10.637556   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.637427   21032 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0815 00:06:10.637577   21011 main.go:141] libmachine: (addons-799058) DBG | created network xml: 
	I0815 00:06:10.637587   21011 main.go:141] libmachine: (addons-799058) DBG | <network>
	I0815 00:06:10.637594   21011 main.go:141] libmachine: (addons-799058) DBG |   <name>mk-addons-799058</name>
	I0815 00:06:10.637603   21011 main.go:141] libmachine: (addons-799058) DBG |   <dns enable='no'/>
	I0815 00:06:10.637611   21011 main.go:141] libmachine: (addons-799058) DBG |   
	I0815 00:06:10.637621   21011 main.go:141] libmachine: (addons-799058) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 00:06:10.637631   21011 main.go:141] libmachine: (addons-799058) DBG |     <dhcp>
	I0815 00:06:10.637641   21011 main.go:141] libmachine: (addons-799058) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 00:06:10.637651   21011 main.go:141] libmachine: (addons-799058) DBG |     </dhcp>
	I0815 00:06:10.637682   21011 main.go:141] libmachine: (addons-799058) DBG |   </ip>
	I0815 00:06:10.637703   21011 main.go:141] libmachine: (addons-799058) DBG |   
	I0815 00:06:10.637710   21011 main.go:141] libmachine: (addons-799058) DBG | </network>
	I0815 00:06:10.637717   21011 main.go:141] libmachine: (addons-799058) DBG | 
	I0815 00:06:10.642660   21011 main.go:141] libmachine: (addons-799058) DBG | trying to create private KVM network mk-addons-799058 192.168.39.0/24...
	I0815 00:06:10.703000   21011 main.go:141] libmachine: (addons-799058) DBG | private KVM network mk-addons-799058 192.168.39.0/24 created
	I0815 00:06:10.703036   21011 main.go:141] libmachine: (addons-799058) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 ...
	I0815 00:06:10.703062   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.702929   21032 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:10.703078   21011 main.go:141] libmachine: (addons-799058) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:06:10.703091   21011 main.go:141] libmachine: (addons-799058) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:06:10.960342   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.960237   21032 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa...
	I0815 00:06:11.251423   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:11.251295   21032 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/addons-799058.rawdisk...
	I0815 00:06:11.251443   21011 main.go:141] libmachine: (addons-799058) DBG | Writing magic tar header
	I0815 00:06:11.251452   21011 main.go:141] libmachine: (addons-799058) DBG | Writing SSH key tar header
	I0815 00:06:11.251465   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:11.251413   21032 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 ...
	I0815 00:06:11.251582   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058
	I0815 00:06:11.251624   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:06:11.251637   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 (perms=drwx------)
	I0815 00:06:11.251657   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:06:11.251668   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:06:11.251686   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:06:11.251698   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:11.251707   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:06:11.251716   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:06:11.251729   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:06:11.251744   21011 main.go:141] libmachine: (addons-799058) Creating domain...
	I0815 00:06:11.251757   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:06:11.251767   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:06:11.251774   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home
	I0815 00:06:11.251786   21011 main.go:141] libmachine: (addons-799058) DBG | Skipping /home - not owner
	I0815 00:06:11.252608   21011 main.go:141] libmachine: (addons-799058) define libvirt domain using xml: 
	I0815 00:06:11.252638   21011 main.go:141] libmachine: (addons-799058) <domain type='kvm'>
	I0815 00:06:11.252662   21011 main.go:141] libmachine: (addons-799058)   <name>addons-799058</name>
	I0815 00:06:11.252678   21011 main.go:141] libmachine: (addons-799058)   <memory unit='MiB'>4000</memory>
	I0815 00:06:11.252688   21011 main.go:141] libmachine: (addons-799058)   <vcpu>2</vcpu>
	I0815 00:06:11.252704   21011 main.go:141] libmachine: (addons-799058)   <features>
	I0815 00:06:11.252714   21011 main.go:141] libmachine: (addons-799058)     <acpi/>
	I0815 00:06:11.252728   21011 main.go:141] libmachine: (addons-799058)     <apic/>
	I0815 00:06:11.252739   21011 main.go:141] libmachine: (addons-799058)     <pae/>
	I0815 00:06:11.252747   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.252756   21011 main.go:141] libmachine: (addons-799058)   </features>
	I0815 00:06:11.252765   21011 main.go:141] libmachine: (addons-799058)   <cpu mode='host-passthrough'>
	I0815 00:06:11.252784   21011 main.go:141] libmachine: (addons-799058)   
	I0815 00:06:11.252802   21011 main.go:141] libmachine: (addons-799058)   </cpu>
	I0815 00:06:11.252815   21011 main.go:141] libmachine: (addons-799058)   <os>
	I0815 00:06:11.252826   21011 main.go:141] libmachine: (addons-799058)     <type>hvm</type>
	I0815 00:06:11.252837   21011 main.go:141] libmachine: (addons-799058)     <boot dev='cdrom'/>
	I0815 00:06:11.252850   21011 main.go:141] libmachine: (addons-799058)     <boot dev='hd'/>
	I0815 00:06:11.252866   21011 main.go:141] libmachine: (addons-799058)     <bootmenu enable='no'/>
	I0815 00:06:11.252878   21011 main.go:141] libmachine: (addons-799058)   </os>
	I0815 00:06:11.252888   21011 main.go:141] libmachine: (addons-799058)   <devices>
	I0815 00:06:11.252899   21011 main.go:141] libmachine: (addons-799058)     <disk type='file' device='cdrom'>
	I0815 00:06:11.252916   21011 main.go:141] libmachine: (addons-799058)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/boot2docker.iso'/>
	I0815 00:06:11.252928   21011 main.go:141] libmachine: (addons-799058)       <target dev='hdc' bus='scsi'/>
	I0815 00:06:11.252938   21011 main.go:141] libmachine: (addons-799058)       <readonly/>
	I0815 00:06:11.252961   21011 main.go:141] libmachine: (addons-799058)     </disk>
	I0815 00:06:11.252973   21011 main.go:141] libmachine: (addons-799058)     <disk type='file' device='disk'>
	I0815 00:06:11.252984   21011 main.go:141] libmachine: (addons-799058)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:06:11.253001   21011 main.go:141] libmachine: (addons-799058)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/addons-799058.rawdisk'/>
	I0815 00:06:11.253013   21011 main.go:141] libmachine: (addons-799058)       <target dev='hda' bus='virtio'/>
	I0815 00:06:11.253024   21011 main.go:141] libmachine: (addons-799058)     </disk>
	I0815 00:06:11.253037   21011 main.go:141] libmachine: (addons-799058)     <interface type='network'>
	I0815 00:06:11.253049   21011 main.go:141] libmachine: (addons-799058)       <source network='mk-addons-799058'/>
	I0815 00:06:11.253061   21011 main.go:141] libmachine: (addons-799058)       <model type='virtio'/>
	I0815 00:06:11.253068   21011 main.go:141] libmachine: (addons-799058)     </interface>
	I0815 00:06:11.253082   21011 main.go:141] libmachine: (addons-799058)     <interface type='network'>
	I0815 00:06:11.253098   21011 main.go:141] libmachine: (addons-799058)       <source network='default'/>
	I0815 00:06:11.253110   21011 main.go:141] libmachine: (addons-799058)       <model type='virtio'/>
	I0815 00:06:11.253121   21011 main.go:141] libmachine: (addons-799058)     </interface>
	I0815 00:06:11.253133   21011 main.go:141] libmachine: (addons-799058)     <serial type='pty'>
	I0815 00:06:11.253143   21011 main.go:141] libmachine: (addons-799058)       <target port='0'/>
	I0815 00:06:11.253153   21011 main.go:141] libmachine: (addons-799058)     </serial>
	I0815 00:06:11.253167   21011 main.go:141] libmachine: (addons-799058)     <console type='pty'>
	I0815 00:06:11.253183   21011 main.go:141] libmachine: (addons-799058)       <target type='serial' port='0'/>
	I0815 00:06:11.253194   21011 main.go:141] libmachine: (addons-799058)     </console>
	I0815 00:06:11.253206   21011 main.go:141] libmachine: (addons-799058)     <rng model='virtio'>
	I0815 00:06:11.253215   21011 main.go:141] libmachine: (addons-799058)       <backend model='random'>/dev/random</backend>
	I0815 00:06:11.253227   21011 main.go:141] libmachine: (addons-799058)     </rng>
	I0815 00:06:11.253239   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.253250   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.253259   21011 main.go:141] libmachine: (addons-799058)   </devices>
	I0815 00:06:11.253268   21011 main.go:141] libmachine: (addons-799058) </domain>
	I0815 00:06:11.253278   21011 main.go:141] libmachine: (addons-799058) 
	I0815 00:06:11.258761   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:c4:4c:bc in network default
	I0815 00:06:11.259268   21011 main.go:141] libmachine: (addons-799058) Ensuring networks are active...
	I0815 00:06:11.259294   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:11.259887   21011 main.go:141] libmachine: (addons-799058) Ensuring network default is active
	I0815 00:06:11.260117   21011 main.go:141] libmachine: (addons-799058) Ensuring network mk-addons-799058 is active
	I0815 00:06:11.260544   21011 main.go:141] libmachine: (addons-799058) Getting domain xml...
	I0815 00:06:11.261240   21011 main.go:141] libmachine: (addons-799058) Creating domain...
	I0815 00:06:12.861274   21011 main.go:141] libmachine: (addons-799058) Waiting to get IP...
	I0815 00:06:12.862014   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:12.862395   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:12.862441   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:12.862386   21032 retry.go:31] will retry after 269.705346ms: waiting for machine to come up
	I0815 00:06:13.133747   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.134124   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.134150   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.134088   21032 retry.go:31] will retry after 277.095287ms: waiting for machine to come up
	I0815 00:06:13.412503   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.412952   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.412984   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.412932   21032 retry.go:31] will retry after 404.245054ms: waiting for machine to come up
	I0815 00:06:13.818206   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.818662   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.818687   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.818599   21032 retry.go:31] will retry after 475.920955ms: waiting for machine to come up
	I0815 00:06:14.296251   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:14.296718   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:14.296747   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:14.296679   21032 retry.go:31] will retry after 541.891693ms: waiting for machine to come up
	I0815 00:06:14.840411   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:14.840884   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:14.840914   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:14.840835   21032 retry.go:31] will retry after 580.924582ms: waiting for machine to come up
	I0815 00:06:15.422974   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:15.423337   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:15.423360   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:15.423292   21032 retry.go:31] will retry after 737.223719ms: waiting for machine to come up
	I0815 00:06:16.161984   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:16.162317   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:16.162342   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:16.162294   21032 retry.go:31] will retry after 1.183276904s: waiting for machine to come up
	I0815 00:06:17.347441   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:17.347844   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:17.347865   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:17.347806   21032 retry.go:31] will retry after 1.210237149s: waiting for machine to come up
	I0815 00:06:18.560280   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:18.560748   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:18.560767   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:18.560710   21032 retry.go:31] will retry after 1.864257604s: waiting for machine to come up
	I0815 00:06:20.426824   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:20.427224   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:20.427251   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:20.427191   21032 retry.go:31] will retry after 2.012133674s: waiting for machine to come up
	I0815 00:06:22.441669   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:22.442169   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:22.442197   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:22.442123   21032 retry.go:31] will retry after 2.238688406s: waiting for machine to come up
	I0815 00:06:24.683348   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:24.683813   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:24.683837   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:24.683746   21032 retry.go:31] will retry after 4.140150604s: waiting for machine to come up
	I0815 00:06:28.827790   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:28.828251   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:28.828282   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:28.828191   21032 retry.go:31] will retry after 5.464126204s: waiting for machine to come up
	I0815 00:06:34.296492   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.296981   21011 main.go:141] libmachine: (addons-799058) Found IP for machine: 192.168.39.195
	I0815 00:06:34.296999   21011 main.go:141] libmachine: (addons-799058) Reserving static IP address...
	I0815 00:06:34.297023   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has current primary IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.297414   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find host DHCP lease matching {name: "addons-799058", mac: "52:54:00:e5:8d:47", ip: "192.168.39.195"} in network mk-addons-799058
	I0815 00:06:34.366887   21011 main.go:141] libmachine: (addons-799058) DBG | Getting to WaitForSSH function...
	I0815 00:06:34.366920   21011 main.go:141] libmachine: (addons-799058) Reserved static IP address: 192.168.39.195
	I0815 00:06:34.366967   21011 main.go:141] libmachine: (addons-799058) Waiting for SSH to be available...
	I0815 00:06:34.369425   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.369802   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.369829   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.370007   21011 main.go:141] libmachine: (addons-799058) DBG | Using SSH client type: external
	I0815 00:06:34.370046   21011 main.go:141] libmachine: (addons-799058) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa (-rw-------)
	I0815 00:06:34.370083   21011 main.go:141] libmachine: (addons-799058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:06:34.370097   21011 main.go:141] libmachine: (addons-799058) DBG | About to run SSH command:
	I0815 00:06:34.370112   21011 main.go:141] libmachine: (addons-799058) DBG | exit 0
	I0815 00:06:34.500705   21011 main.go:141] libmachine: (addons-799058) DBG | SSH cmd err, output: <nil>: 
	I0815 00:06:34.501016   21011 main.go:141] libmachine: (addons-799058) KVM machine creation complete!
	I0815 00:06:34.501349   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:34.501890   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:34.502104   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:34.502291   21011 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:06:34.502312   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:34.503609   21011 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:06:34.503627   21011 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:06:34.503639   21011 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:06:34.503646   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.506083   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.506469   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.506491   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.506543   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.506724   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.506864   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.507075   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.507241   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.507413   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.507424   21011 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:06:34.607449   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:06:34.607472   21011 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:06:34.607480   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.609946   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.610248   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.610281   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.610394   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.610567   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.610736   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.610864   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.611018   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.611247   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.611264   21011 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:06:34.712976   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:06:34.713063   21011 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:06:34.713070   21011 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:06:34.713077   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.713337   21011 buildroot.go:166] provisioning hostname "addons-799058"
	I0815 00:06:34.713374   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.713534   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.716021   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.716314   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.716338   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.716506   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.716700   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.716856   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.716995   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.717159   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.717309   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.717320   21011 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-799058 && echo "addons-799058" | sudo tee /etc/hostname
	I0815 00:06:34.828895   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-799058
	
	I0815 00:06:34.828921   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.831482   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.831877   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.831906   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.832057   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.832211   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.832396   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.832519   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.832703   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.832871   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.832893   21011 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-799058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-799058/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-799058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:06:34.940050   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:06:34.940083   21011 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:06:34.940110   21011 buildroot.go:174] setting up certificates
	I0815 00:06:34.940125   21011 provision.go:84] configureAuth start
	I0815 00:06:34.940134   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.940372   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:34.942815   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.943139   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.943167   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.943326   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.945351   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.945694   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.945720   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.945846   21011 provision.go:143] copyHostCerts
	I0815 00:06:34.945917   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:06:34.946041   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:06:34.946121   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:06:34.946187   21011 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.addons-799058 san=[127.0.0.1 192.168.39.195 addons-799058 localhost minikube]
	I0815 00:06:35.144674   21011 provision.go:177] copyRemoteCerts
	I0815 00:06:35.144743   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:06:35.144771   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.147413   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.147693   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.147719   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.147910   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.148113   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.148231   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.148366   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.226572   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:06:35.248541   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:06:35.269897   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:06:35.291544   21011 provision.go:87] duration metric: took 351.409181ms to configureAuth
	I0815 00:06:35.291568   21011 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:06:35.291741   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:35.291813   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.294511   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.294825   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.294849   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.294999   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.295233   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.295390   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.295526   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.295676   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:35.295830   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:35.295845   21011 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:06:35.552944   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:06:35.552978   21011 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:06:35.552990   21011 main.go:141] libmachine: (addons-799058) Calling .GetURL
	I0815 00:06:35.554503   21011 main.go:141] libmachine: (addons-799058) DBG | Using libvirt version 6000000
	I0815 00:06:35.556782   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.557162   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.557191   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.557356   21011 main.go:141] libmachine: Docker is up and running!
	I0815 00:06:35.557376   21011 main.go:141] libmachine: Reticulating splines...
	I0815 00:06:35.557383   21011 client.go:171] duration metric: took 25.286426747s to LocalClient.Create
	I0815 00:06:35.557404   21011 start.go:167] duration metric: took 25.286481251s to libmachine.API.Create "addons-799058"
	I0815 00:06:35.557417   21011 start.go:293] postStartSetup for "addons-799058" (driver="kvm2")
	I0815 00:06:35.557436   21011 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:06:35.557454   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.557707   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:06:35.557732   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.560242   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.560673   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.560698   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.560840   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.561010   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.561159   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.561289   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.642584   21011 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:06:35.646522   21011 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:06:35.646544   21011 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:06:35.646621   21011 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:06:35.646647   21011 start.go:296] duration metric: took 89.218187ms for postStartSetup
	I0815 00:06:35.646679   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:35.647207   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:35.649533   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.649822   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.649848   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.650047   21011 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json ...
	I0815 00:06:35.650216   21011 start.go:128] duration metric: took 25.396100957s to createHost
	I0815 00:06:35.650237   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.652512   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.652785   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.652812   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.652963   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.653132   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.653267   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.653400   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.653534   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:35.653734   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:35.653749   21011 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:06:35.752917   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723680395.730873362
	
	I0815 00:06:35.752935   21011 fix.go:216] guest clock: 1723680395.730873362
	I0815 00:06:35.752942   21011 fix.go:229] Guest: 2024-08-15 00:06:35.730873362 +0000 UTC Remote: 2024-08-15 00:06:35.650227152 +0000 UTC m=+25.491307107 (delta=80.64621ms)
	I0815 00:06:35.752981   21011 fix.go:200] guest clock delta is within tolerance: 80.64621ms
	I0815 00:06:35.752987   21011 start.go:83] releasing machines lock for "addons-799058", held for 25.498966551s
	I0815 00:06:35.753006   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.753269   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:35.755785   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.756172   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.756200   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.756311   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.756759   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.756931   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.757027   21011 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:06:35.757076   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.757136   21011 ssh_runner.go:195] Run: cat /version.json
	I0815 00:06:35.757160   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.759665   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.759989   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.760016   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760034   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760181   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.760335   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.760407   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.760435   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760505   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.760610   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.760694   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.760850   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.761011   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.761143   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.879186   21011 ssh_runner.go:195] Run: systemctl --version
	I0815 00:06:35.885448   21011 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:06:36.044090   21011 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:06:36.049846   21011 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:06:36.049905   21011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:06:36.064232   21011 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:06:36.064254   21011 start.go:495] detecting cgroup driver to use...
	I0815 00:06:36.064305   21011 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:06:36.078926   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:06:36.092167   21011 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:06:36.092219   21011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:06:36.105009   21011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:06:36.117801   21011 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:06:36.230456   21011 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:06:36.368784   21011 docker.go:233] disabling docker service ...
	I0815 00:06:36.368854   21011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:06:36.383097   21011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:06:36.395202   21011 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:06:36.529505   21011 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:06:36.646399   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:06:36.658932   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:06:36.676100   21011 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:06:36.676179   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.685818   21011 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:06:36.685886   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.695388   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.704858   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.714417   21011 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:06:36.723945   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.733195   21011 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.748766   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.758117   21011 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:06:36.766482   21011 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:06:36.766534   21011 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:06:36.777972   21011 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:06:36.786465   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:36.898183   21011 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:06:37.025230   21011 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:06:37.025322   21011 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:06:37.029933   21011 start.go:563] Will wait 60s for crictl version
	I0815 00:06:37.030005   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:06:37.033417   21011 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:06:37.072396   21011 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:06:37.072508   21011 ssh_runner.go:195] Run: crio --version
	I0815 00:06:37.098595   21011 ssh_runner.go:195] Run: crio --version
	I0815 00:06:37.124731   21011 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:06:37.125917   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:37.128483   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:37.128946   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:37.128974   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:37.129162   21011 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:06:37.133185   21011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:06:37.144483   21011 kubeadm.go:883] updating cluster {Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:06:37.144585   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:06:37.144625   21011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:06:37.174107   21011 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 00:06:37.174176   21011 ssh_runner.go:195] Run: which lz4
	I0815 00:06:37.177693   21011 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 00:06:37.181238   21011 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 00:06:37.181263   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 00:06:38.242706   21011 crio.go:462] duration metric: took 1.065040637s to copy over tarball
	I0815 00:06:38.242788   21011 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 00:06:40.288709   21011 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.045888537s)
	I0815 00:06:40.288737   21011 crio.go:469] duration metric: took 2.046004098s to extract the tarball
	I0815 00:06:40.288744   21011 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 00:06:40.324163   21011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:06:40.361857   21011 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:06:40.361877   21011 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:06:40.361884   21011 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.0 crio true true} ...
	I0815 00:06:40.362002   21011 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-799058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:06:40.362076   21011 ssh_runner.go:195] Run: crio config
	I0815 00:06:40.408991   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:40.409007   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:40.409015   21011 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:06:40.409035   21011 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-799058 NodeName:addons-799058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:06:40.409185   21011 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-799058"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:06:40.409254   21011 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:06:40.418321   21011 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:06:40.418379   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:06:40.427030   21011 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 00:06:40.442309   21011 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:06:40.457211   21011 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0815 00:06:40.472017   21011 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0815 00:06:40.475430   21011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:06:40.486026   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:40.604111   21011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:40.619831   21011 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058 for IP: 192.168.39.195
	I0815 00:06:40.619860   21011 certs.go:194] generating shared ca certs ...
	I0815 00:06:40.619880   21011 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.620036   21011 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:06:40.825973   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt ...
	I0815 00:06:40.826000   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt: {Name:mkd3e103dfde5f206ead9a3e4d8372a081099209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.826158   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key ...
	I0815 00:06:40.826175   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key: {Name:mk858692bd11cbc88063c41a856d1ac58611345d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.826248   21011 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:06:40.997336   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt ...
	I0815 00:06:40.997366   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt: {Name:mke403b2a0c9b8a48d4da4e9d029de98a1d02c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.997535   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key ...
	I0815 00:06:40.997546   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key: {Name:mkea6fc1db5986e1d892c17d1aa0b30b9bc24b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.997615   21011 certs.go:256] generating profile certs ...
	I0815 00:06:40.997671   21011 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key
	I0815 00:06:40.997685   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt with IP's: []
	I0815 00:06:41.047187   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt ...
	I0815 00:06:41.047213   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: {Name:mkc8ff87590ba027b7b2e49b84053e4ac4e7196b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.047363   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key ...
	I0815 00:06:41.047373   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key: {Name:mk68c7c40f8d859acb7013258245941eb8d6c252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.047444   21011 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016
	I0815 00:06:41.047462   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0815 00:06:41.400706   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 ...
	I0815 00:06:41.400740   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016: {Name:mk883b5f3f1cc11cbbc4632f9f43ffe1babbaa44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.400899   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016 ...
	I0815 00:06:41.400912   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016: {Name:mk65edcec3fd42e0963f07457048457a5f14bf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.400996   21011 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt
	I0815 00:06:41.401065   21011 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key
	I0815 00:06:41.401110   21011 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key
	I0815 00:06:41.401127   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt with IP's: []
	I0815 00:06:41.537178   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt ...
	I0815 00:06:41.537206   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt: {Name:mk5198a1a578e019397de305f73cca9eca2115fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.537368   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key ...
	I0815 00:06:41.537379   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key: {Name:mk5d87737b41328f6b5573db35e9853260839abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.537534   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:06:41.537565   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:06:41.537587   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:06:41.537610   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:06:41.538144   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:06:41.562161   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:06:41.583393   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:06:41.604191   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:06:41.624431   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:06:41.644971   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:06:41.666593   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:06:41.687172   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:06:41.708080   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:06:41.729273   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:06:41.743748   21011 ssh_runner.go:195] Run: openssl version
	I0815 00:06:41.748708   21011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:06:41.758100   21011 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.761839   21011 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.761891   21011 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.766903   21011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:06:41.776338   21011 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:06:41.779792   21011 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:06:41.779842   21011 kubeadm.go:392] StartCluster: {Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:06:41.779926   21011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:06:41.779979   21011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:06:41.813703   21011 cri.go:89] found id: ""
	I0815 00:06:41.813763   21011 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:06:41.822906   21011 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:06:41.831651   21011 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:06:41.840200   21011 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:06:41.840216   21011 kubeadm.go:157] found existing configuration files:
	
	I0815 00:06:41.840249   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:06:41.848195   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:06:41.848260   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:06:41.858131   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:06:41.866020   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:06:41.866061   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:06:41.874217   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:06:41.882165   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:06:41.882218   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:06:41.890419   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:06:41.898707   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:06:41.898782   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:06:41.906860   21011 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 00:06:41.953494   21011 kubeadm.go:310] W0815 00:06:41.937366     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:06:41.954210   21011 kubeadm.go:310] W0815 00:06:41.938125     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:06:42.060208   21011 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:06:51.849218   21011 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:06:51.849285   21011 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:06:51.849368   21011 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:06:51.849519   21011 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:06:51.849629   21011 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:06:51.849688   21011 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:06:51.851387   21011 out.go:204]   - Generating certificates and keys ...
	I0815 00:06:51.851475   21011 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:06:51.851532   21011 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:06:51.851588   21011 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:06:51.851636   21011 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:06:51.851697   21011 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:06:51.851795   21011 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:06:51.851866   21011 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:06:51.852029   21011 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-799058 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0815 00:06:51.852082   21011 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:06:51.852185   21011 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-799058 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0815 00:06:51.852264   21011 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:06:51.852354   21011 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:06:51.852423   21011 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:06:51.852499   21011 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:06:51.852579   21011 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:06:51.852638   21011 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:06:51.852721   21011 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:06:51.852798   21011 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:06:51.852851   21011 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:06:51.852934   21011 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:06:51.853028   21011 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:06:51.854515   21011 out.go:204]   - Booting up control plane ...
	I0815 00:06:51.854610   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:06:51.854711   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:06:51.854780   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:06:51.854869   21011 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:06:51.854957   21011 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:06:51.855007   21011 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:06:51.855150   21011 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:06:51.855251   21011 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:06:51.855303   21011 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262567ms
	I0815 00:06:51.855365   21011 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:06:51.855423   21011 kubeadm.go:310] [api-check] The API server is healthy after 5.002286644s
	I0815 00:06:51.855518   21011 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:06:51.855625   21011 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:06:51.855676   21011 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:06:51.855830   21011 kubeadm.go:310] [mark-control-plane] Marking the node addons-799058 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:06:51.855878   21011 kubeadm.go:310] [bootstrap-token] Using token: r61chi.auagym2grvm1kzxt
	I0815 00:06:51.857298   21011 out.go:204]   - Configuring RBAC rules ...
	I0815 00:06:51.857387   21011 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:06:51.857482   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:06:51.857679   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:06:51.857802   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:06:51.857931   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:06:51.858002   21011 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:06:51.858129   21011 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:06:51.858170   21011 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:06:51.858221   21011 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:06:51.858230   21011 kubeadm.go:310] 
	I0815 00:06:51.858310   21011 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:06:51.858319   21011 kubeadm.go:310] 
	I0815 00:06:51.858400   21011 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:06:51.858410   21011 kubeadm.go:310] 
	I0815 00:06:51.858435   21011 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:06:51.858501   21011 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:06:51.858555   21011 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:06:51.858561   21011 kubeadm.go:310] 
	I0815 00:06:51.858607   21011 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:06:51.858616   21011 kubeadm.go:310] 
	I0815 00:06:51.858661   21011 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:06:51.858667   21011 kubeadm.go:310] 
	I0815 00:06:51.858722   21011 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:06:51.858785   21011 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:06:51.858873   21011 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:06:51.858883   21011 kubeadm.go:310] 
	I0815 00:06:51.858997   21011 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:06:51.859064   21011 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:06:51.859070   21011 kubeadm.go:310] 
	I0815 00:06:51.859197   21011 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r61chi.auagym2grvm1kzxt \
	I0815 00:06:51.859334   21011 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 00:06:51.859354   21011 kubeadm.go:310] 	--control-plane 
	I0815 00:06:51.859360   21011 kubeadm.go:310] 
	I0815 00:06:51.859468   21011 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:06:51.859476   21011 kubeadm.go:310] 
	I0815 00:06:51.859550   21011 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r61chi.auagym2grvm1kzxt \
	I0815 00:06:51.859643   21011 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
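
The two warnings at the top of this kubeadm output flag the v1beta3 config API as deprecated. The remedy they quote is a one-time config migration; the file names below are the placeholders from the warning text itself, not paths this run actually used:

# Sketch of the migration the warning suggests (old.yaml/new.yaml are placeholders)
kubeadm config migrate --old-config old.yaml --new-config new.yaml
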
	I0815 00:06:51.859658   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:51.859669   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:51.861173   21011 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 00:06:51.862308   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 00:06:51.876287   21011 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
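
The 496-byte conflist scp'd above is not echoed in the log; for illustration only, a generic bridge CNI config with host-local IPAM and portmap has roughly this shape (field values are assumptions, not the exact payload minikube wrote):

# Sketch only: hand-writing a bridge conflist of the kind referenced above
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
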
	I0815 00:06:51.892502   21011 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:06:51.892542   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:51.892591   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-799058 minikube.k8s.io/updated_at=2024_08_15T00_06_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-799058 minikube.k8s.io/primary=true
	I0815 00:06:51.916485   21011 ops.go:34] apiserver oom_adj: -16
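
The minikube-rbac binding created two lines up grants cluster-admin to the kube-system default service account. To inspect the object that one-liner generates without applying anything, the same kubectl invocation can be run as a client-side dry run (the profile-local binary path and kubeconfig flag are dropped here for brevity):

# Show the ClusterRoleBinding the command above would create, without touching the cluster
kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default \
  --dry-run=client -o yaml
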
	I0815 00:06:52.006970   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:52.507109   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:53.007408   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:53.507652   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:54.008006   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:54.507969   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:55.007394   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:55.507114   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.007582   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.508018   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.576312   21011 kubeadm.go:1113] duration metric: took 4.683829214s to wait for elevateKubeSystemPrivileges
	I0815 00:06:56.576350   21011 kubeadm.go:394] duration metric: took 14.796511743s to StartCluster
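
The repeated "get sa default" calls above are the ~4.7s wait for the default service account to exist before kube-system privileges are elevated. An equivalent hand-rolled wait, assuming kubectl already points at this cluster, is just a retry loop:

# Poll until the default ServiceAccount appears (sketch of the wait minikube performs)
until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
  sleep 0.5
done
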
	I0815 00:06:56.576378   21011 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:56.576499   21011 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:06:56.576857   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:56.577031   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:06:56.577070   21011 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:56.577118   21011 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
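
The toEnable map above is the addon set the test harness requested at start. Outside the harness, individual addons from that map are toggled per profile with the minikube CLI; for example (addon name chosen from the map for illustration):

# Enable one addon against this profile
minikube addons enable metrics-server -p addons-799058
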
	I0815 00:06:56.577208   21011 addons.go:69] Setting yakd=true in profile "addons-799058"
	I0815 00:06:56.577229   21011 addons.go:69] Setting inspektor-gadget=true in profile "addons-799058"
	I0815 00:06:56.577242   21011 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-799058"
	I0815 00:06:56.577250   21011 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-799058"
	I0815 00:06:56.577241   21011 addons.go:69] Setting storage-provisioner=true in profile "addons-799058"
	I0815 00:06:56.577275   21011 addons.go:69] Setting volumesnapshots=true in profile "addons-799058"
	I0815 00:06:56.577283   21011 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-799058"
	I0815 00:06:56.577284   21011 addons.go:69] Setting ingress-dns=true in profile "addons-799058"
	I0815 00:06:56.577289   21011 addons.go:69] Setting cloud-spanner=true in profile "addons-799058"
	I0815 00:06:56.577289   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:56.577300   21011 addons.go:234] Setting addon volumesnapshots=true in "addons-799058"
	I0815 00:06:56.577301   21011 addons.go:69] Setting registry=true in profile "addons-799058"
	I0815 00:06:56.577310   21011 addons.go:69] Setting metrics-server=true in profile "addons-799058"
	I0815 00:06:56.577313   21011 addons.go:234] Setting addon cloud-spanner=true in "addons-799058"
	I0815 00:06:56.577319   21011 addons.go:234] Setting addon registry=true in "addons-799058"
	I0815 00:06:56.577326   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577328   21011 addons.go:234] Setting addon metrics-server=true in "addons-799058"
	I0815 00:06:56.577332   21011 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-799058"
	I0815 00:06:56.577336   21011 addons.go:69] Setting default-storageclass=true in profile "addons-799058"
	I0815 00:06:56.577344   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577350   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577362   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577370   21011 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-799058"
	I0815 00:06:56.577376   21011 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-799058"
	I0815 00:06:56.577394   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577674   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577691   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577304   21011 addons.go:234] Setting addon ingress-dns=true in "addons-799058"
	I0815 00:06:56.577746   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577752   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577758   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577764   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577769   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577770   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577794   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577795   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577815   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577326   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577276   21011 addons.go:69] Setting helm-tiller=true in profile "addons-799058"
	I0815 00:06:56.577881   21011 addons.go:234] Setting addon helm-tiller=true in "addons-799058"
	I0815 00:06:56.577262   21011 addons.go:69] Setting gcp-auth=true in profile "addons-799058"
	I0815 00:06:56.577898   21011 mustload.go:65] Loading cluster: addons-799058
	I0815 00:06:56.577294   21011 addons.go:234] Setting addon storage-provisioner=true in "addons-799058"
	I0815 00:06:56.577236   21011 addons.go:234] Setting addon yakd=true in "addons-799058"
	I0815 00:06:56.577269   21011 addons.go:69] Setting volcano=true in profile "addons-799058"
	I0815 00:06:56.577926   21011 addons.go:234] Setting addon volcano=true in "addons-799058"
	I0815 00:06:56.577267   21011 addons.go:234] Setting addon inspektor-gadget=true in "addons-799058"
	I0815 00:06:56.577950   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577962   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577280   21011 addons.go:69] Setting ingress=true in profile "addons-799058"
	I0815 00:06:56.577262   21011 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-799058"
	I0815 00:06:56.578089   21011 addons.go:234] Setting addon ingress=true in "addons-799058"
	I0815 00:06:56.578126   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578159   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578188   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578248   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578283   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578452   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578472   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578574   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578595   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578649   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578656   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578665   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578768   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:56.578781   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578883   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578910   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578981   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579003   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579067   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579088   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579099   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579112   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579122   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579145   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.580107   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.580495   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.580521   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.581232   21011 out.go:177] * Verifying Kubernetes components...
	I0815 00:06:56.582691   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:56.598247   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0815 00:06:56.598246   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I0815 00:06:56.598802   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.598909   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.599306   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.599324   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.599438   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.599456   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.599561   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0815 00:06:56.599718   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.599903   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.599959   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.600048   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.600645   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.600674   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.600960   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.601029   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.601046   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.601936   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.601984   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.602888   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0815 00:06:56.603453   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.603853   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.603873   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.604315   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0815 00:06:56.604469   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.604618   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.605183   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.605218   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.610411   21011 addons.go:234] Setting addon default-storageclass=true in "addons-799058"
	I0815 00:06:56.610451   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.610807   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.610840   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.616876   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0815 00:06:56.617056   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.617071   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.617456   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.617640   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.618212   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.618231   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.618523   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.618559   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.618888   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.619466   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.619502   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.623694   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0815 00:06:56.624153   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.624694   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.624712   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.625059   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.625643   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.625676   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.638808   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0815 00:06:56.640954   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0815 00:06:56.641454   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.642019   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.642036   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.642403   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.642588   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.642758   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0815 00:06:56.643438   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.643918   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.643938   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.644271   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.644401   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.644840   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.645095   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.645306   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.645326   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.645629   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.645904   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.646609   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.647050   21011 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:06:56.647687   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.648209   21011 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:06:56.648265   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:06:56.648285   21011 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:06:56.648329   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.649134   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:06:56.650550   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:06:56.650565   21011 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:06:56.650582   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.650729   21011 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:56.650738   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:06:56.650752   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.651913   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.651948   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.651965   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.652148   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0815 00:06:56.652310   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.652462   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.652702   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.653002   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0815 00:06:56.653038   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.653099   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.653971   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.653999   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.654073   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.654682   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.654700   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.654795   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.655233   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.655581   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.655645   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.655850   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.655859   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.655887   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.656069   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.656405   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.656608   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.656789   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.656946   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.656991   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.657529   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.657558   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.657708   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.657897   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0815 00:06:56.657901   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.658616   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.658618   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0815 00:06:56.658757   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.672552   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0815 00:06:56.672566   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0815 00:06:56.672619   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.672634   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0815 00:06:56.672785   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.672797   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0815 00:06:56.672820   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0815 00:06:56.672571   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.673404   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673409   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673529   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.673552   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.673567   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.673577   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673589   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673617   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674375   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674396   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674537   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674552   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674598   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.674698   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674712   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674750   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.674842   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674855   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.675047   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.675139   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.675202   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675236   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675256   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675475   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.675602   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.676015   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.676056   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.676240   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.676272   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.676388   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:06:56.676483   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0815 00:06:56.677049   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.677080   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.677123   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.677287   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0815 00:06:56.677694   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.677708   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.677737   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.677799   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.678626   21011 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-799058"
	I0815 00:06:56.678669   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.678756   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.678776   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:06:56.678780   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.678971   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.678984   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.679003   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.679028   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.679338   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.679374   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.679570   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.679593   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.679612   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.679631   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.679975   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.680133   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.680218   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.680252   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.680284   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.680334   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.680522   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:06:56.681388   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:06:56.682467   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.683351   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:06:56.683410   21011 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:06:56.684381   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:06:56.684427   21011 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:56.684441   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:06:56.684457   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.686506   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 00:06:56.687431   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:06:56.687918   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.688329   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:06:56.688345   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:06:56.688539   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.688542   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.688561   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.688587   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.688733   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.688881   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.689061   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.691749   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.692142   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.692165   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.692510   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.692760   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.692931   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.693118   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.702500   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I0815 00:06:56.702936   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.703358   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.703372   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.703627   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.704017   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.704045   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.706460   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0815 00:06:56.707252   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.707583   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0815 00:06:56.708022   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.708328   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.708344   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.708445   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.708451   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.708789   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.708946   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.708982   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.709547   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.709585   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.709934   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0815 00:06:56.710254   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.710678   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.710693   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.710940   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.710999   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.711475   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.713094   21011 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:06:56.714028   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0815 00:06:56.714064   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0815 00:06:56.714223   21011 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:56.714236   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:06:56.714253   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.714422   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.715106   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.715127   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.715155   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.715678   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.715879   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.716026   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.716042   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.716785   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.716944   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.718608   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.718649   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.719050   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.719107   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.719122   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.719180   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.719307   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.719422   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.720189   21011 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:06:56.720555   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.721363   21011 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:56.721386   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:06:56.721402   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.722130   21011 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:06:56.723223   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:06:56.723242   21011 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:06:56.723257   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.724763   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.725545   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.725551   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.725566   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.725703   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.725838   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.725936   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.727144   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.727617   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.727635   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.727804   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.728875   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I0815 00:06:56.728978   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.729122   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.729246   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.729698   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.730015   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.730027   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.731595   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0815 00:06:56.731611   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.731866   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.732280   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.732788   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.732810   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.733151   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.733388   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.734350   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.734466   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0815 00:06:56.735062   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.735409   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.735603   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.735617   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.735880   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.736035   21011 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:06:56.736067   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.737795   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.738126   21011 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 00:06:56.738629   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0815 00:06:56.738896   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.739148   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0815 00:06:56.739188   21011 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:06:56.739236   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.739248   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.739290   21011 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:06:56.739418   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 00:06:56.739428   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 00:06:56.739439   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.739909   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.740100   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.740451   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.740830   21011 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:06:56.740843   21011 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:06:56.740881   21011 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:56.740881   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.740889   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:06:56.740963   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.741797   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.741542   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.742519   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.742611   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:06:56.742622   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:06:56.742764   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:06:56.742777   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:06:56.742785   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:06:56.742792   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:06:56.742995   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:06:56.743050   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:06:56.743062   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 00:06:56.743147   21011 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 00:06:56.743533   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.743837   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.744972   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745445   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.745465   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.745491   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745789   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.745803   21011 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:56.745811   21011 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:06:56.745821   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.745792   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745850   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.745873   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745975   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.746040   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.746341   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.746339   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.746735   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.746789   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.747030   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.748178   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748443   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748620   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.748709   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748850   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.748869   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748903   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.749075   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.749076   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.749240   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.749268   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.749329   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.749359   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.749434   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	W0815 00:06:56.752723   21011 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56348->192.168.39.195:22: read: connection reset by peer
	I0815 00:06:56.752745   21011 retry.go:31] will retry after 267.092203ms: ssh: handshake failed: read tcp 192.168.39.1:56348->192.168.39.195:22: read: connection reset by peer
	I0815 00:06:56.754715   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I0815 00:06:56.755055   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.755479   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.755499   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.755743   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.755899   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.756753   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0815 00:06:56.757126   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.757379   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.757553   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.757570   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.757985   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.758199   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.758823   21011 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:06:56.759522   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.760798   21011 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:06:56.760804   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:56.761788   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:56.761794   21011 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:06:56.761809   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:06:56.761822   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.762835   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:06:56.764255   21011 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:56.764307   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:06:56.764355   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.765186   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.765563   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.765586   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.765782   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.766060   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.766191   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.766306   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.766882   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.767211   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.767230   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.767344   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.767506   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.767650   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.767750   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:57.105630   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:06:57.105653   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:06:57.137778   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:57.172641   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:06:57.172675   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:06:57.175519   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:06:57.175542   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:06:57.182410   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:57.196563   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:57.198261   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:57.213733   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:57.254905   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:06:57.254935   21011 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:06:57.262572   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:06:57.262594   21011 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:06:57.264351   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:57.322930   21011 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:06:57.322959   21011 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:06:57.323258   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 00:06:57.323271   21011 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 00:06:57.326631   21011 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:06:57.326650   21011 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:06:57.375149   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:06:57.375171   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:06:57.402693   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:06:57.402721   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:06:57.418190   21011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:57.418227   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:06:57.458709   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:57.458735   21011 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:06:57.484059   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:06:57.484080   21011 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:06:57.533052   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:57.546691   21011 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:57.546718   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:06:57.549539   21011 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:06:57.549555   21011 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:06:57.587081   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:57.587107   21011 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 00:06:57.622802   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:06:57.622824   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:06:57.625074   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:06:57.625087   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:06:57.678934   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:57.740199   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:06:57.740224   21011 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:06:57.780477   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:57.781743   21011 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:06:57.781762   21011 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:06:57.792016   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:06:57.792038   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:06:57.796864   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:57.840190   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:06:57.840222   21011 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:06:57.904137   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:57.904159   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:06:58.001070   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:06:58.001107   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:06:58.005124   21011 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:06:58.005142   21011 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:06:58.024988   21011 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:58.025005   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:06:58.149055   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:06:58.149084   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:06:58.160097   21011 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:06:58.160118   21011 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:06:58.163844   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:58.192811   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:58.426506   21011 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:06:58.426537   21011 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:06:58.462891   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:06:58.462914   21011 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:06:58.587145   21011 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:58.587166   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:06:58.764676   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:58.836674   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:06:58.836696   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:06:59.229260   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:06:59.229280   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:06:59.381097   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:59.381128   21011 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:06:59.639603   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:07:00.016940   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.879122854s)
	I0815 00:07:00.016994   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:00.017006   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:00.017378   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:00.017385   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:00.017413   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:00.017430   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:00.017440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:00.017691   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:00.017710   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:01.140080   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.957634891s)
	I0815 00:07:01.140136   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:01.140147   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:01.140436   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:01.140454   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:01.140472   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:01.140480   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:01.140731   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:01.140744   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:03.758192   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:07:03.758234   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:07:03.761408   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:03.761894   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:07:03.761928   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:03.762138   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:07:03.762381   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:07:03.762547   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:07:03.762694   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:07:04.112114   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:07:04.201551   21011 addons.go:234] Setting addon gcp-auth=true in "addons-799058"
	I0815 00:07:04.201596   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:07:04.201916   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:07:04.201941   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:07:04.218067   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0815 00:07:04.218497   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:07:04.218948   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:07:04.218969   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:07:04.219246   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:07:04.219676   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:07:04.219701   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:07:04.234596   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0815 00:07:04.235061   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:07:04.235547   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:07:04.235572   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:07:04.235884   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:07:04.236065   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:07:04.237688   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:07:04.237914   21011 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:07:04.237935   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:07:04.240721   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:04.241083   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:07:04.241110   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:04.241269   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:07:04.241458   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:07:04.241620   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:07:04.241738   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:07:04.563901   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.367299971s)
	I0815 00:07:04.563950   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.563962   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.563964   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.365674876s)
	I0815 00:07:04.563996   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564013   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564110   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.350354461s)
	I0815 00:07:04.564138   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564148   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564157   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.299780618s)
	I0815 00:07:04.564178   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564193   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564210   21011 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.145991516s)
	I0815 00:07:04.564230   21011 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.145980709s)
	I0815 00:07:04.564244   21011 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 00:07:04.564255   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.031177286s)
	I0815 00:07:04.564271   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564279   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564391   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.885427003s)
	I0815 00:07:04.564426   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564523   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564524   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.784023893s)
	I0815 00:07:04.564551   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564562   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564589   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.767700415s)
	I0815 00:07:04.564609   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564618   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564640   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.564665   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.564676   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564683   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564720   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.400843776s)
	I0815 00:07:04.564738   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564748   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564872   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.372029944s)
	W0815 00:07:04.564896   21011 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:07:04.564919   21011 retry.go:31] will retry after 221.047494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:07:04.564972   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.564983   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.564992   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564992   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564995   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.800292192s)
	I0815 00:07:04.565014   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565020   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565025   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565029   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565030   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565037   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565044   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565048   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564998   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565066   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565074   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565081   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565087   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565090   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565101   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565109   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565113   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565116   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565124   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565133   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565140   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565142   21011 node_ready.go:35] waiting up to 6m0s for node "addons-799058" to be "Ready" ...
	I0815 00:07:04.565088   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565240   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565250   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565429   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565583   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565628   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567260   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567287   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567301   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567310   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567315   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567318   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567323   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.567349   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.567466   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567482   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567502   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567509   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567516   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.567523   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.567575   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567582   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567589   21011 addons.go:475] Verifying addon registry=true in "addons-799058"
	I0815 00:07:04.568318   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568362   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568370   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568378   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.568384   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.568764   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568773   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568786   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568799   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568807   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568860   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568882   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568889   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568896   21011 addons.go:475] Verifying addon metrics-server=true in "addons-799058"
	I0815 00:07:04.568924   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568945   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568952   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568960   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.568967   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.569017   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.569037   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.569046   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.569053   21011 addons.go:475] Verifying addon ingress=true in "addons-799058"
	I0815 00:07:04.569457   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.569468   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.569416   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.570745   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.570753   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.570765   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.570784   21011 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-799058 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:07:04.570787   21011 out.go:177] * Verifying ingress addon...
	I0815 00:07:04.570873   21011 out.go:177] * Verifying registry addon...
	I0815 00:07:04.572792   21011 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 00:07:04.573108   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:07:04.578827   21011 node_ready.go:49] node "addons-799058" has status "Ready":"True"
	I0815 00:07:04.578851   21011 node_ready.go:38] duration metric: took 13.693608ms for node "addons-799058" to be "Ready" ...
	I0815 00:07:04.578869   21011 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:07:04.593983   21011 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:07:04.594004   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:04.604456   21011 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:07:04.604472   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:04.608975   21011 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:04.666429   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.666453   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.666675   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.666720   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.666783   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.666796   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.666804   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.666807   21011 pod_ready.go:92] pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:04.666820   21011 pod_ready.go:81] duration metric: took 57.815829ms for pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:04.666831   21011 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace to be "Ready" ...
	W0815 00:07:04.666876   21011 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I0815 00:07:04.667015   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.667030   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.786834   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:07:05.068636   21011 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-799058" context rescaled to 1 replicas
	I0815 00:07:05.077998   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:05.078334   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.578471   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:05.580581   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.082607   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:06.085649   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.584587   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:06.584943   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.696965   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:06.956373   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.316729747s)
	I0815 00:07:06.956425   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.956455   21011 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.718516803s)
	I0815 00:07:06.956667   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.16977813s)
	I0815 00:07:06.956709   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.956709   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956728   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.956750   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.956760   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.956778   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956790   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.957016   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957040   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957051   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.957064   21011 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-799058"
	I0815 00:07:06.957101   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957128   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957183   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.957196   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.957204   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.957452   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957487   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957500   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.958288   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:07:06.958331   21011 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:07:06.959594   21011 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:07:06.960471   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:07:06.960989   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:07:06.961016   21011 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:07:06.973726   21011 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:07:06.973756   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.036274   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:07:07.036296   21011 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:07:07.077934   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:07.078059   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.082822   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:07:07.082844   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:07:07.145438   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:07:07.465709   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.576877   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:07.578957   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.012124   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.090989   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:08.091369   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.271985   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.126506881s)
	I0815 00:07:08.272052   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:08.272072   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:08.272347   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:08.272365   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:08.272375   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:08.272383   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:08.272678   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:08.272699   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:08.274332   21011 addons.go:475] Verifying addon gcp-auth=true in "addons-799058"
	I0815 00:07:08.276249   21011 out.go:177] * Verifying gcp-auth addon...
	I0815 00:07:08.278294   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:07:08.284564   21011 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:07:08.284579   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:08.464863   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.578366   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:08.578819   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.781823   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:08.964475   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:09.077993   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:09.078435   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.172142   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:09.281161   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:09.467303   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:09.577323   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.577476   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:09.782470   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:09.965205   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:10.077273   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:10.077899   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.282572   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:10.464241   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:10.577319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:10.578290   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.782531   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:10.966305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:11.076802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:11.077417   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.173203   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:11.282042   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:11.464927   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:11.761039   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:11.761178   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.780927   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:11.964918   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:12.077795   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.080410   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:12.187628   21011 pod_ready.go:97] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:07:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 00:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 00:07:00 +0000 UTC,FinishedAt:2024-08-15 00:07:09 +0000 UTC,ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73 Started:0xc002140910 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001fb6d60} {Name:kube-api-access-b29kk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001fb6d70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 00:07:12.187666   21011 pod_ready.go:81] duration metric: took 7.52082575s for pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace to be "Ready" ...
	E0815 00:07:12.187682   21011 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:07:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 00:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 00:07:00 +0000 UTC,FinishedAt:2024-08-15 00:07:09 +0000 UTC,ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73 Started:0xc002140910 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001fb6d60} {Name:kube-api-access-b29kk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001fb6d70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 00:07:12.187696   21011 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.199518   21011 pod_ready.go:92] pod "etcd-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.199548   21011 pod_ready.go:81] duration metric: took 11.843509ms for pod "etcd-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.199576   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.204088   21011 pod_ready.go:92] pod "kube-apiserver-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.204111   21011 pod_ready.go:81] duration metric: took 4.52618ms for pod "kube-apiserver-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.204123   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.210510   21011 pod_ready.go:92] pod "kube-controller-manager-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.210535   21011 pod_ready.go:81] duration metric: took 6.403283ms for pod "kube-controller-manager-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.210550   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w8m2t" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.218659   21011 pod_ready.go:92] pod "kube-proxy-w8m2t" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.218675   21011 pod_ready.go:81] duration metric: took 8.118325ms for pod "kube-proxy-w8m2t" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.218684   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.284398   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:12.466056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:12.572279   21011 pod_ready.go:92] pod "kube-scheduler-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.572308   21011 pod_ready.go:81] duration metric: took 353.617402ms for pod "kube-scheduler-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.572320   21011 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.578294   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.579889   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:12.781118   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:12.964724   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:13.080686   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:13.080908   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.281752   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:13.465810   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:13.577277   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:13.580219   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.781840   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:13.964740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:14.077553   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:14.077917   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.287443   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:14.465075   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:14.577494   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:14.578193   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.579052   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:14.782418   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:14.967163   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:15.076622   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:15.077435   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.282241   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:15.465674   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:15.577035   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:15.577451   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.781428   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:15.965560   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:16.077455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:16.078629   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.282054   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:16.465201   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:16.579651   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:16.580478   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.585881   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:16.782687   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:16.965601   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:17.077008   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.077658   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:17.282043   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:17.464856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:17.578176   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.578881   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:17.781383   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:17.965066   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:18.078711   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:18.078999   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.282439   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:18.465629   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:18.578749   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:18.578931   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.782108   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:18.965042   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:19.078927   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.079530   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:19.081573   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:19.281693   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:19.464189   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:19.578401   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:19.578944   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.781449   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:19.965090   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:20.077941   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.078467   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:20.282563   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:20.464947   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:20.578343   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.578682   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:20.782414   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:20.965577   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:21.088339   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:21.089125   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.090959   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:21.282201   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:21.465629   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:21.577646   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:21.577730   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.781991   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:21.970599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:22.079439   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.080641   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:22.282882   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:22.465087   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:22.576920   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:22.577893   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.781852   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:22.964965   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:23.078430   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:23.078668   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.282976   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:23.464585   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:23.577843   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.579195   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:23.580370   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:23.782176   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:23.965183   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:24.078738   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.079342   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:24.282030   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:24.466050   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:24.577166   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:24.577358   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.783316   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:24.965922   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:25.077907   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.080509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:25.282655   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:25.465560   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:25.577512   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:25.578603   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.782527   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:25.965606   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:26.077474   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.078429   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:26.078843   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:26.282004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:26.464748   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:26.577056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:26.577774   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.781308   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:26.967663   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:27.077015   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:27.077414   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.282599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:27.465476   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:27.578586   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.578722   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:27.782746   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:27.964548   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:28.077529   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:28.081271   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:28.082036   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.297322   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:28.752222   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.752336   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:28.752890   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:28.781339   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:28.964922   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:29.077941   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.079915   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:29.281949   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:29.464111   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:29.577879   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.579272   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:29.782060   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:29.965169   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:30.078147   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.078742   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:30.286424   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:30.465594   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:30.576127   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:30.577632   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.578942   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:30.782156   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:30.965636   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:31.076991   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:31.077657   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.282259   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:31.464752   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:31.577895   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:31.578692   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.782098   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:31.964489   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:32.078105   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.078212   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:32.281761   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:32.464719   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:32.579378   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:32.579957   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.582510   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:32.781431   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:32.966332   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:33.078137   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:33.078364   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.281276   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:33.465417   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:33.579620   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:33.579923   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.782517   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:33.965315   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:34.077878   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:34.078045   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.282377   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:34.466194   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:34.577626   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:34.578400   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.781993   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:34.966720   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:35.078423   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:35.079964   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:35.081208   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:35.281144   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:35.465396   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:35.577316   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:35.579795   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:35.781305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:35.965208   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:36.077339   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:36.077771   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:36.281728   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:36.464904   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:36.578334   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:36.580004   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:36.781727   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:36.964482   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:37.076482   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:37.077822   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:37.282117   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:37.465395   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:37.577802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:37.578125   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:37.578974   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:37.782004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:37.964503   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:38.076759   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:38.077757   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:38.281814   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:38.465458   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:38.579113   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:38.579353   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:38.781406   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:38.965210   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:39.077482   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:39.078218   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:39.281959   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:39.464787   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:39.578740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:39.578987   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:39.580355   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:39.782687   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:39.965497   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:40.078136   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:40.079489   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:40.281892   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:40.465507   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:40.577874   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:40.578161   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:40.783640   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:40.965976   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:41.078236   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:41.078469   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:41.282033   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:41.465853   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:41.576603   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:41.577165   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:41.781958   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:41.965148   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:42.076887   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:42.077628   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:42.078429   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:42.303850   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:42.469650   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:42.576633   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:42.576830   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:42.782810   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:42.965264   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:43.081196   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:43.081756   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:43.281503   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:43.466767   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:43.576735   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:43.578177   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:43.782040   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:43.964882   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:44.077517   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:44.078635   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:44.081699   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:44.281415   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:44.467905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:44.577393   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:44.580939   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:44.781904   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:44.965305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:45.076905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:45.078010   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:45.282268   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:45.465952   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:45.599209   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:45.599821   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:45.782056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:45.965356   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:46.077107   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:46.079423   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:46.281566   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:46.465856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:46.578399   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:46.580416   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:46.582135   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:46.782298   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:46.966448   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:47.078839   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:47.079596   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:47.282685   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:47.467319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:47.578015   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:47.579136   21011 kapi.go:107] duration metric: took 43.006026082s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 00:07:47.781731   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:47.965236   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:48.078462   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:48.282735   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:48.465088   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:48.577479   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:48.782556   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:48.965639   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:49.077751   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:49.081509   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:49.281349   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:49.465440   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:49.579165   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:49.781996   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:49.964735   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:50.076771   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:50.281399   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:50.466809   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:50.579059   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:50.782602   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:50.965557   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:51.076023   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:51.281023   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:51.465571   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:51.577081   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:51.579048   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:51.781747   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:51.965014   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:52.079202   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:52.286969   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:52.464491   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:52.577434   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:52.783079   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:52.964775   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:53.078020   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:53.281805   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:53.464744   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:53.578419   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:53.582925   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:53.782038   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:53.965526   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:54.076839   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:54.282099   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:54.465136   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:54.576998   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:54.783186   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:54.964812   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:55.078932   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:55.282536   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:55.464781   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:55.577834   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:55.781833   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:55.965016   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:56.079337   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:56.080960   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:56.281720   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:56.464740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:56.578407   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:56.781864   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:56.964513   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:57.077352   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:57.283896   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:57.477855   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:57.582702   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:57.783238   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:57.965639   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:58.077669   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:58.282127   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:58.465722   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:58.576688   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:58.578905   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:58.781650   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:58.965262   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:59.078138   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:59.281496   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:59.465453   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:59.578234   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:59.782145   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:59.968913   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:00.077558   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:00.282004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:00.465356   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:00.577364   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:00.579445   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:00.783653   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:00.964905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:01.077923   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:01.281364   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:01.465080   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:01.578056   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:01.783226   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:01.965046   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:02.076770   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:02.283101   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:02.465031   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:02.578053   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:02.783032   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:02.968065   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:03.078845   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:03.080884   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:03.281944   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:03.464956   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:03.577241   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:03.781455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:03.967084   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:04.077884   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:04.282840   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:04.465168   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:04.578045   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:04.783040   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:04.964612   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:05.076973   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:05.282319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:05.464763   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:05.578363   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:05.580011   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:05.782557   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:05.966250   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:06.080303   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:06.287184   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:06.465373   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:06.579029   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:06.784185   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:06.965959   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:07.080912   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:07.281766   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:07.464739   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:07.586860   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:07.591870   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:07.783190   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:07.965139   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:08.077052   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:08.281640   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:08.464590   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:08.577741   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:08.782460   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:08.966645   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:09.077359   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:09.282509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:09.465851   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:09.580560   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:09.781739   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:09.965556   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:10.077612   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:10.079023   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:10.292762   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:10.475496   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:10.584389   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:10.782010   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.347160   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.347735   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:11.347996   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:11.465652   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:11.577170   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:11.782257   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.966886   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:12.077638   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:12.081024   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:12.281714   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:12.465184   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:12.580287   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:12.782526   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:12.965541   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:13.080560   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:13.285347   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:13.465945   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:13.577452   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:13.781599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:13.966266   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:14.077408   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:14.283053   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:14.464930   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:14.784360   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:14.785095   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:14.787900   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:14.966309   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:15.077043   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:15.281758   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:15.466208   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:15.577302   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:15.782098   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:15.964869   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:16.076948   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:16.281775   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:16.464856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:16.584838   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:16.784397   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:16.968065   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:17.085864   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:17.087631   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:17.283455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:17.465440   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:17.843373   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:17.844278   21011 kapi.go:107] duration metric: took 1m13.27148182s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:08:18.026509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:18.322084   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:18.465116   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:18.782207   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:18.965359   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:19.282077   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:19.465656   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:19.579262   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:19.782089   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:19.967413   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:20.282518   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:20.465660   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:20.781599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:20.965374   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:21.282249   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:21.465839   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:21.783458   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:21.965929   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:22.078802   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:22.285986   21011 kapi.go:107] duration metric: took 1m14.007689781s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:08:22.287361   21011 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-799058 cluster.
	I0815 00:08:22.288537   21011 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:08:22.289520   21011 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 00:08:22.466909   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:22.968802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:23.465535   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:23.965573   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:24.479262   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:24.579533   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:24.965855   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:25.467799   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:25.965315   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:26.467500   21011 kapi.go:107] duration metric: took 1m19.507027779s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:08:26.469214   21011 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 00:08:26.470265   21011 addons.go:510] duration metric: took 1m29.893146816s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin inspektor-gadget metrics-server helm-tiller yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0815 00:08:26.579622   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:28.593045   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:31.078099   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:32.577596   21011 pod_ready.go:92] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:32.577616   21011 pod_ready.go:81] duration metric: took 1m20.005288968s for pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.577636   21011 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.581788   21011 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:32.581804   21011 pod_ready.go:81] duration metric: took 4.162913ms for pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.581822   21011 pod_ready.go:38] duration metric: took 1m28.00293987s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:08:32.581838   21011 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:08:32.581879   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:32.581925   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:32.624183   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:32.624203   21011 cri.go:89] found id: ""
	I0815 00:08:32.624211   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:32.624255   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.628117   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:32.628164   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:32.664258   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:32.664281   21011 cri.go:89] found id: ""
	I0815 00:08:32.664294   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:32.664350   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.668249   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:32.668340   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:32.702483   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:32.702506   21011 cri.go:89] found id: ""
	I0815 00:08:32.702515   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:32.702572   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.706214   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:32.706265   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:32.748939   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:32.748962   21011 cri.go:89] found id: ""
	I0815 00:08:32.748971   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:32.749019   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.752759   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:32.752805   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:32.787184   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:32.787205   21011 cri.go:89] found id: ""
	I0815 00:08:32.787215   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:32.787267   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.790875   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:32.790936   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:32.825094   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:32.825111   21011 cri.go:89] found id: ""
	I0815 00:08:32.825119   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:32.825169   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.829165   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:32.829214   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:32.863721   21011 cri.go:89] found id: ""
	I0815 00:08:32.863747   21011 logs.go:276] 0 containers: []
	W0815 00:08:32.863759   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:32.863768   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:32.863779   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:32.998484   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:32.998508   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:33.044876   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:33.044903   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:33.100460   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:33.100488   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:33.154205   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:33.154247   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:33.918618   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:33.918663   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:34.008157   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:34.008193   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:34.022331   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:34.022361   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:34.055805   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:34.055836   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:34.101793   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:34.101818   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:34.137768   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:34.137790   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:36.697527   21011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:08:36.717467   21011 api_server.go:72] duration metric: took 1m40.140368673s to wait for apiserver process to appear ...
	I0815 00:08:36.717486   21011 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:08:36.717515   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:36.717559   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:36.762131   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:36.762157   21011 cri.go:89] found id: ""
	I0815 00:08:36.762167   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:36.762212   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.765968   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:36.766024   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:36.800497   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:36.800520   21011 cri.go:89] found id: ""
	I0815 00:08:36.800530   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:36.800584   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.804297   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:36.804359   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:36.838394   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:36.838411   21011 cri.go:89] found id: ""
	I0815 00:08:36.838418   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:36.838467   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.842170   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:36.842219   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:36.887217   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:36.887244   21011 cri.go:89] found id: ""
	I0815 00:08:36.887254   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:36.887306   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.892331   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:36.892398   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:36.933664   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:36.933682   21011 cri.go:89] found id: ""
	I0815 00:08:36.933690   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:36.933734   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.938120   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:36.938186   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:36.977242   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:36.977269   21011 cri.go:89] found id: ""
	I0815 00:08:36.977279   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:36.977342   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.981262   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:36.981323   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:37.016043   21011 cri.go:89] found id: ""
	I0815 00:08:37.016069   21011 logs.go:276] 0 containers: []
	W0815 00:08:37.016077   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:37.016087   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:37.016102   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:37.052982   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:37.053007   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:37.098916   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:37.098947   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:37.143999   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:37.144028   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:37.260585   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:37.260612   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:37.312434   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:37.312461   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:37.349486   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:37.349525   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:37.414647   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:37.414680   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:38.264837   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:38.264885   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:38.310470   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:38.310500   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:38.390281   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:38.390315   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:40.908588   21011 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0815 00:08:40.913318   21011 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0815 00:08:40.914267   21011 api_server.go:141] control plane version: v1.31.0
	I0815 00:08:40.914289   21011 api_server.go:131] duration metric: took 4.19679749s to wait for apiserver health ...
	I0815 00:08:40.914297   21011 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:08:40.914315   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:40.914364   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:40.955707   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:40.955728   21011 cri.go:89] found id: ""
	I0815 00:08:40.955735   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:40.955780   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:40.959499   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:40.959562   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:41.001498   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:41.001517   21011 cri.go:89] found id: ""
	I0815 00:08:41.001524   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:41.001569   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.005598   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:41.005638   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:41.044195   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:41.044216   21011 cri.go:89] found id: ""
	I0815 00:08:41.044226   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:41.044282   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.048045   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:41.048091   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:41.088199   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:41.088215   21011 cri.go:89] found id: ""
	I0815 00:08:41.088221   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:41.088268   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.092123   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:41.092169   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:41.133571   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:41.133596   21011 cri.go:89] found id: ""
	I0815 00:08:41.133605   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:41.133662   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.137878   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:41.137945   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:41.184888   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:41.184911   21011 cri.go:89] found id: ""
	I0815 00:08:41.184920   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:41.184980   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.189258   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:41.189314   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:41.225843   21011 cri.go:89] found id: ""
	I0815 00:08:41.225867   21011 logs.go:276] 0 containers: []
	W0815 00:08:41.225875   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:41.225883   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:41.225893   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:41.295073   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:41.295103   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:41.337195   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:41.337221   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:41.378765   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:41.378792   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:41.415889   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:41.415921   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:42.312367   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:42.312426   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:42.370567   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:42.370599   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:42.511442   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:42.511468   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:42.525611   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:42.525643   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:42.576744   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:42.576778   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:42.636406   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:42.636441   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:45.225407   21011 system_pods.go:59] 18 kube-system pods found
	I0815 00:08:45.225437   21011 system_pods.go:61] "coredns-6f6b679f8f-52frj" [14443991-d0d3-4971-ace5-79219c17a3a4] Running
	I0815 00:08:45.225443   21011 system_pods.go:61] "csi-hostpath-attacher-0" [07bcc102-e23b-4e0f-b36a-83560a72e91f] Running
	I0815 00:08:45.225447   21011 system_pods.go:61] "csi-hostpath-resizer-0" [ae76d226-ea7b-4fcf-8713-0cfafece3e41] Running
	I0815 00:08:45.225450   21011 system_pods.go:61] "csi-hostpathplugin-5dp4z" [d97e647b-48bd-4f97-a7a7-9212f1ed9da6] Running
	I0815 00:08:45.225453   21011 system_pods.go:61] "etcd-addons-799058" [c6cb9162-e068-4148-9d9f-41f388239eb1] Running
	I0815 00:08:45.225456   21011 system_pods.go:61] "kube-apiserver-addons-799058" [861b1168-123e-40d8-b823-f643f214aafc] Running
	I0815 00:08:45.225459   21011 system_pods.go:61] "kube-controller-manager-addons-799058" [f960d7dd-d373-4498-a4fc-9ac1fd923b96] Running
	I0815 00:08:45.225462   21011 system_pods.go:61] "kube-ingress-dns-minikube" [b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa] Running
	I0815 00:08:45.225464   21011 system_pods.go:61] "kube-proxy-w8m2t" [26a17fd3-81aa-46a5-b148-82c4e3d16273] Running
	I0815 00:08:45.225467   21011 system_pods.go:61] "kube-scheduler-addons-799058" [2785a399-481e-4950-8779-b898b5f2a900] Running
	I0815 00:08:45.225471   21011 system_pods.go:61] "metrics-server-8988944d9-q4bwq" [95a56e8f-f680-4b31-bdc3-34e9e748a9b7] Running
	I0815 00:08:45.225474   21011 system_pods.go:61] "nvidia-device-plugin-daemonset-4jqvz" [86f19320-28d1-4fc0-9865-20a09c4e891a] Running
	I0815 00:08:45.225476   21011 system_pods.go:61] "registry-6fb4cdfc84-fwfvr" [0c0970af-9934-491e-bcfa-fa54ed7e0e3e] Running
	I0815 00:08:45.225479   21011 system_pods.go:61] "registry-proxy-kq9fl" [58301448-7012-48c0-8f9b-a5da1d7ebb5b] Running
	I0815 00:08:45.225481   21011 system_pods.go:61] "snapshot-controller-56fcc65765-9j9cr" [49b196b9-2c6f-4376-b6bd-25f7bcba9b02] Running
	I0815 00:08:45.225485   21011 system_pods.go:61] "snapshot-controller-56fcc65765-bbx2t" [ce67ca25-a279-4610-af34-e7d1aeb14426] Running
	I0815 00:08:45.225487   21011 system_pods.go:61] "storage-provisioner" [1409d83f-8419-4e70-9137-80faff3e10c2] Running
	I0815 00:08:45.225492   21011 system_pods.go:61] "tiller-deploy-b48cc5f79-xd29w" [792a4027-3c8e-4383-ae2c-9615a900c9a9] Running
	I0815 00:08:45.225500   21011 system_pods.go:74] duration metric: took 4.311197977s to wait for pod list to return data ...
	I0815 00:08:45.225507   21011 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:08:45.227828   21011 default_sa.go:45] found service account: "default"
	I0815 00:08:45.227846   21011 default_sa.go:55] duration metric: took 2.332119ms for default service account to be created ...
	I0815 00:08:45.227853   21011 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:08:45.234250   21011 system_pods.go:86] 18 kube-system pods found
	I0815 00:08:45.234274   21011 system_pods.go:89] "coredns-6f6b679f8f-52frj" [14443991-d0d3-4971-ace5-79219c17a3a4] Running
	I0815 00:08:45.234279   21011 system_pods.go:89] "csi-hostpath-attacher-0" [07bcc102-e23b-4e0f-b36a-83560a72e91f] Running
	I0815 00:08:45.234283   21011 system_pods.go:89] "csi-hostpath-resizer-0" [ae76d226-ea7b-4fcf-8713-0cfafece3e41] Running
	I0815 00:08:45.234287   21011 system_pods.go:89] "csi-hostpathplugin-5dp4z" [d97e647b-48bd-4f97-a7a7-9212f1ed9da6] Running
	I0815 00:08:45.234290   21011 system_pods.go:89] "etcd-addons-799058" [c6cb9162-e068-4148-9d9f-41f388239eb1] Running
	I0815 00:08:45.234295   21011 system_pods.go:89] "kube-apiserver-addons-799058" [861b1168-123e-40d8-b823-f643f214aafc] Running
	I0815 00:08:45.234299   21011 system_pods.go:89] "kube-controller-manager-addons-799058" [f960d7dd-d373-4498-a4fc-9ac1fd923b96] Running
	I0815 00:08:45.234303   21011 system_pods.go:89] "kube-ingress-dns-minikube" [b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa] Running
	I0815 00:08:45.234307   21011 system_pods.go:89] "kube-proxy-w8m2t" [26a17fd3-81aa-46a5-b148-82c4e3d16273] Running
	I0815 00:08:45.234310   21011 system_pods.go:89] "kube-scheduler-addons-799058" [2785a399-481e-4950-8779-b898b5f2a900] Running
	I0815 00:08:45.234314   21011 system_pods.go:89] "metrics-server-8988944d9-q4bwq" [95a56e8f-f680-4b31-bdc3-34e9e748a9b7] Running
	I0815 00:08:45.234318   21011 system_pods.go:89] "nvidia-device-plugin-daemonset-4jqvz" [86f19320-28d1-4fc0-9865-20a09c4e891a] Running
	I0815 00:08:45.234322   21011 system_pods.go:89] "registry-6fb4cdfc84-fwfvr" [0c0970af-9934-491e-bcfa-fa54ed7e0e3e] Running
	I0815 00:08:45.234325   21011 system_pods.go:89] "registry-proxy-kq9fl" [58301448-7012-48c0-8f9b-a5da1d7ebb5b] Running
	I0815 00:08:45.234334   21011 system_pods.go:89] "snapshot-controller-56fcc65765-9j9cr" [49b196b9-2c6f-4376-b6bd-25f7bcba9b02] Running
	I0815 00:08:45.234338   21011 system_pods.go:89] "snapshot-controller-56fcc65765-bbx2t" [ce67ca25-a279-4610-af34-e7d1aeb14426] Running
	I0815 00:08:45.234346   21011 system_pods.go:89] "storage-provisioner" [1409d83f-8419-4e70-9137-80faff3e10c2] Running
	I0815 00:08:45.234350   21011 system_pods.go:89] "tiller-deploy-b48cc5f79-xd29w" [792a4027-3c8e-4383-ae2c-9615a900c9a9] Running
	I0815 00:08:45.234355   21011 system_pods.go:126] duration metric: took 6.49824ms to wait for k8s-apps to be running ...
	I0815 00:08:45.234364   21011 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:08:45.234417   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:08:45.249947   21011 system_svc.go:56] duration metric: took 15.574951ms WaitForService to wait for kubelet
	I0815 00:08:45.249979   21011 kubeadm.go:582] duration metric: took 1m48.672886295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:08:45.249999   21011 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:08:45.253261   21011 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:08:45.253286   21011 node_conditions.go:123] node cpu capacity is 2
	I0815 00:08:45.253298   21011 node_conditions.go:105] duration metric: took 3.294781ms to run NodePressure ...
	I0815 00:08:45.253309   21011 start.go:241] waiting for startup goroutines ...
	I0815 00:08:45.253318   21011 start.go:246] waiting for cluster config update ...
	I0815 00:08:45.253333   21011 start.go:255] writing updated cluster config ...
	I0815 00:08:45.253606   21011 ssh_runner.go:195] Run: rm -f paused
	I0815 00:08:45.302478   21011 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:08:45.304086   21011 out.go:177] * Done! kubectl is now configured to use "addons-799058" cluster and "default" namespace by default
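	
	For readers who want to re-run this diagnostic pass by hand against the same node, the commands below are a minimal sketch assembled from the `Run:` lines recorded above. The crictl, journalctl, dmesg and kubectl invocations are copied verbatim from this log; wrapping them in `minikube ssh -p addons-799058` (profile name taken from the "Done!" line above) is an assumption about how the node is reached, not something the test itself does.
	
	    # Assumed entry point: open a shell on the node of the addons-799058 profile.
	    minikube ssh -p addons-799058
	
	    # Container/runtime state (same invocations as the ssh_runner lines above).
	    sudo crictl ps -a
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # Component logs gathered by the test (units and dmesg filter copied from the log).
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	    # Node and pod description via the kubectl binary bundled for the cluster version.
	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	
	The "==> CRI-O <==" section that follows is the captured `journalctl -u crio` output from the node; the repeated ListContainers/ImageFsInfo request-response pairs are the runtime answering the kubelet's periodic CRI polling.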
	
	
	==> CRI-O <==
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.848051626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680746848024848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dae24fe-0083-4df7-8472-0d27a0dea85a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.849140452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=908afb46-0d48-4d9d-8b22-1b8c2988e05a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.849291503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=908afb46-0d48-4d9d-8b22-1b8c2988e05a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.849640526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b162c742a67fddc15fd058e1853705a94b5890c2260201bda9851660186ae28d,PodSandboxId:3c5f8df9655d3381b478a49d8a96ce2eabdef6046dc0a01901c434d83956c6ed,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680480632986846,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tmjw9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 49a43188-62a8-436a-baf2-a45e2063afc7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de790a53febea377b11276e0a41297b62d40f1771b20b93694b0bc964019409a,PodSandboxId:29b5deafc3b4589379f34cd8c41173ea0ac14f81ac3cb2c27ff07db84ca4aa5a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680479906649263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-blzdw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 585ed
d66-8be8-4d12-89c9-98f611d2c1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723680406005709507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17236804
05977363934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=908afb46-0d48-4d9d-8b22-1b8c2988e05a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.893945821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56504bcd-9594-429c-b257-45675e622103 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.894033145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56504bcd-9594-429c-b257-45675e622103 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.899118133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aebec5e0-3154-4815-89dc-c57a5fa7d748 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.900539248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680746900504510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aebec5e0-3154-4815-89dc-c57a5fa7d748 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.901293583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af2a7c57-e799-4ac2-ace2-f39048c164f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.901360996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af2a7c57-e799-4ac2-ace2-f39048c164f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.901700031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b162c742a67fddc15fd058e1853705a94b5890c2260201bda9851660186ae28d,PodSandboxId:3c5f8df9655d3381b478a49d8a96ce2eabdef6046dc0a01901c434d83956c6ed,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680480632986846,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tmjw9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 49a43188-62a8-436a-baf2-a45e2063afc7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de790a53febea377b11276e0a41297b62d40f1771b20b93694b0bc964019409a,PodSandboxId:29b5deafc3b4589379f34cd8c41173ea0ac14f81ac3cb2c27ff07db84ca4aa5a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680479906649263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-blzdw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 585ed
d66-8be8-4d12-89c9-98f611d2c1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723680406005709507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17236804
05977363934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af2a7c57-e799-4ac2-ace2-f39048c164f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.942574507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc801a7e-50da-4408-9eb2-f883f7096d60 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.942661640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc801a7e-50da-4408-9eb2-f883f7096d60 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.943824241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01202932-6c38-47bd-a677-636f64f88b19 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.945119649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680746945091477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01202932-6c38-47bd-a677-636f64f88b19 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.945705261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47589cbd-e6f9-4856-867f-c374e011f8c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.945813480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47589cbd-e6f9-4856-867f-c374e011f8c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.946164873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b162c742a67fddc15fd058e1853705a94b5890c2260201bda9851660186ae28d,PodSandboxId:3c5f8df9655d3381b478a49d8a96ce2eabdef6046dc0a01901c434d83956c6ed,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680480632986846,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tmjw9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 49a43188-62a8-436a-baf2-a45e2063afc7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de790a53febea377b11276e0a41297b62d40f1771b20b93694b0bc964019409a,PodSandboxId:29b5deafc3b4589379f34cd8c41173ea0ac14f81ac3cb2c27ff07db84ca4aa5a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680479906649263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-blzdw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 585ed
d66-8be8-4d12-89c9-98f611d2c1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723680406005709507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17236804
05977363934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47589cbd-e6f9-4856-867f-c374e011f8c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.978660043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8409ef7a-11e6-4cd5-aaa1-2af3e8fffb8d name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.978805687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8409ef7a-11e6-4cd5-aaa1-2af3e8fffb8d name=/runtime.v1.RuntimeService/Version
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.979865574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cd5f29f-4748-49d4-9f4d-fa1010d1a194 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.981495288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680746981471074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cd5f29f-4748-49d4-9f4d-fa1010d1a194 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.981995386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b0c85bc-f379-4b6f-ad83-ca31eea2f325 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.982061390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b0c85bc-f379-4b6f-ad83-ca31eea2f325 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:12:26 addons-799058 crio[672]: time="2024-08-15 00:12:26.982386557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b162c742a67fddc15fd058e1853705a94b5890c2260201bda9851660186ae28d,PodSandboxId:3c5f8df9655d3381b478a49d8a96ce2eabdef6046dc0a01901c434d83956c6ed,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680480632986846,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tmjw9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 49a43188-62a8-436a-baf2-a45e2063afc7,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de790a53febea377b11276e0a41297b62d40f1771b20b93694b0bc964019409a,PodSandboxId:29b5deafc3b4589379f34cd8c41173ea0ac14f81ac3cb2c27ff07db84ca4aa5a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723680479906649263,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-blzdw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 585ed
d66-8be8-4d12-89c9-98f611d2c1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a
7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,State:CONTAINER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:
1723680406005709507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:17236804
05977363934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b0c85bc-f379-4b6f-ad83-ca31eea2f325 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e79f8c796118e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   8b751f03a8aae       hello-world-app-55bf9c44b4-wbmmj
	20e7a23046585       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   0bfd4e7031a9c       nginx
	8914521ade923       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   d0c83e0816f9d       busybox
	b162c742a67fd       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   3c5f8df9655d3       ingress-nginx-admission-patch-tmjw9
	de790a53febea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   29b5deafc3b45       ingress-nginx-admission-create-blzdw
	dc86ea9d9d136       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        5 minutes ago       Running             metrics-server            0                   19aaea48b156d       metrics-server-8988944d9-q4bwq
	4e32777771788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   dcc54c3df9e9d       storage-provisioner
	b93836edc2ea0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   26038c7838ab4       coredns-6f6b679f8f-52frj
	1a5055649b6ad       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   e497bc0b1ae95       kube-proxy-w8m2t
	807c4f41537ad       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   e52eba7cb561b       kube-scheduler-addons-799058
	fcfebfef6006c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   ae520df873e65       kube-apiserver-addons-799058
	976439828fd9f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   7d2609d4df11f       etcd-addons-799058
	169699f15f7ad       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   291a455ba1d58       kube-controller-manager-addons-799058
	
	
	==> coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] <==
	[INFO] 10.244.0.7:44728 - 48913 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163735s
	[INFO] 10.244.0.7:33085 - 41949 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000162335s
	[INFO] 10.244.0.7:33085 - 26579 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071213s
	[INFO] 10.244.0.7:42100 - 33841 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181494s
	[INFO] 10.244.0.7:42100 - 42547 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081153s
	[INFO] 10.244.0.7:43066 - 4739 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125024s
	[INFO] 10.244.0.7:43066 - 13185 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080829s
	[INFO] 10.244.0.7:36814 - 26352 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090989s
	[INFO] 10.244.0.7:36814 - 39148 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036433s
	[INFO] 10.244.0.7:59349 - 13803 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049202s
	[INFO] 10.244.0.7:59349 - 44268 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036549s
	[INFO] 10.244.0.7:58584 - 43526 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046797s
	[INFO] 10.244.0.7:58584 - 20992 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021114s
	[INFO] 10.244.0.7:41449 - 15767 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000034624s
	[INFO] 10.244.0.7:41449 - 45465 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043147s
	[INFO] 10.244.0.22:34376 - 26710 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460643s
	[INFO] 10.244.0.22:36728 - 44220 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00007705s
	[INFO] 10.244.0.22:56456 - 364 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100243s
	[INFO] 10.244.0.22:46575 - 63414 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184612s
	[INFO] 10.244.0.22:49957 - 28793 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144474s
	[INFO] 10.244.0.22:46582 - 43057 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121699s
	[INFO] 10.244.0.22:37055 - 23457 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000605161s
	[INFO] 10.244.0.22:51558 - 49034 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000506856s
	[INFO] 10.244.0.24:36290 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000265033s
	[INFO] 10.244.0.24:35067 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112312s
	
	
	==> describe nodes <==
	Name:               addons-799058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-799058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-799058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_06_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-799058
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-799058
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:12:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:10:25 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:10:25 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:10:25 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:10:25 +0000   Thu, 15 Aug 2024 00:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-799058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a1aa125092d40769e61470729cb010e
	  System UUID:                5a1aa125-092d-4076-9e61-470729cb010e
	  Boot ID:                    b9c872b0-2204-4dd5-9cf2-48f47e734356
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  default                     hello-world-app-55bf9c44b4-wbmmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-6f6b679f8f-52frj                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m31s
	  kube-system                 etcd-addons-799058                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m36s
	  kube-system                 kube-apiserver-addons-799058             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-addons-799058    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-w8m2t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-addons-799058             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 metrics-server-8988944d9-q4bwq           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node addons-799058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node addons-799058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node addons-799058 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m36s                  kubelet          Node addons-799058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s                  kubelet          Node addons-799058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s                  kubelet          Node addons-799058 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m35s                  kubelet          Node addons-799058 status is now: NodeReady
	  Normal  RegisteredNode           5m32s                  node-controller  Node addons-799058 event: Registered Node addons-799058 in Controller
	
	
	==> dmesg <==
	[  +5.022722] kauditd_printk_skb: 138 callbacks suppressed
	[  +5.394243] kauditd_printk_skb: 57 callbacks suppressed
	[ +10.261879] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.772450] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.088064] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.252351] kauditd_printk_skb: 4 callbacks suppressed
	[Aug15 00:08] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.018775] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.954549] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.554755] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.334519] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.101248] kauditd_printk_skb: 38 callbacks suppressed
	[ +28.048616] kauditd_printk_skb: 7 callbacks suppressed
	[Aug15 00:09] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.662460] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.139342] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.013746] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.186732] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.121671] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.338675] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.806288] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.724259] kauditd_printk_skb: 22 callbacks suppressed
	[Aug15 00:10] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.173074] kauditd_printk_skb: 10 callbacks suppressed
	[Aug15 00:12] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] <==
	{"level":"info","ts":"2024-08-15T00:08:14.763956Z","caller":"traceutil/trace.go:171","msg":"trace[231408599] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-q4bwq; range_end:; response_count:1; response_revision:1145; }","duration":"202.576938ms","start":"2024-08-15T00:08:14.561374Z","end":"2024-08-15T00:08:14.763951Z","steps":["trace[231408599] 'agreement among raft nodes before linearized reading'  (duration: 202.502563ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:14.764190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.989959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-15T00:08:14.764210Z","caller":"traceutil/trace.go:171","msg":"trace[1205138313] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1145; }","duration":"148.011889ms","start":"2024-08-15T00:08:14.616191Z","end":"2024-08-15T00:08:14.764203Z","steps":["trace[1205138313] 'agreement among raft nodes before linearized reading'  (duration: 147.940397ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:08:17.826297Z","caller":"traceutil/trace.go:171","msg":"trace[188591949] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1189; }","duration":"265.092832ms","start":"2024-08-15T00:08:17.561182Z","end":"2024-08-15T00:08:17.826275Z","steps":["trace[188591949] 'read index received'  (duration: 265.075923ms)","trace[188591949] 'applied index is now lower than readState.Index'  (duration: 16.29µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:08:17.826473Z","caller":"traceutil/trace.go:171","msg":"trace[1923140916] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"299.78416ms","start":"2024-08-15T00:08:17.526544Z","end":"2024-08-15T00:08:17.826328Z","steps":["trace[1923140916] 'process raft request'  (duration: 299.64744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:17.826567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.381748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:08:17.826613Z","caller":"traceutil/trace.go:171","msg":"trace[2015704096] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1159; }","duration":"265.425357ms","start":"2024-08-15T00:08:17.561178Z","end":"2024-08-15T00:08:17.826603Z","steps":["trace[2015704096] 'agreement among raft nodes before linearized reading'  (duration: 265.330725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:17.827030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:08:17.526518Z","time spent":"300.034608ms","remote":"127.0.0.1:52584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1134 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-08-15T00:08:17.826494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.290901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-q4bwq\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-15T00:08:17.827340Z","caller":"traceutil/trace.go:171","msg":"trace[281104173] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-q4bwq; range_end:; response_count:1; response_revision:1159; }","duration":"266.149096ms","start":"2024-08-15T00:08:17.561181Z","end":"2024-08-15T00:08:17.827330Z","steps":["trace[281104173] 'agreement among raft nodes before linearized reading'  (duration: 265.19053ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:08:45.172795Z","caller":"traceutil/trace.go:171","msg":"trace[1109562241] linearizableReadLoop","detail":"{readStateIndex:1299; appliedIndex:1298; }","duration":"278.273234ms","start":"2024-08-15T00:08:44.894485Z","end":"2024-08-15T00:08:45.172758Z","steps":["trace[1109562241] 'read index received'  (duration: 278.088927ms)","trace[1109562241] 'applied index is now lower than readState.Index'  (duration: 183.574µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:08:45.172979Z","caller":"traceutil/trace.go:171","msg":"trace[1567391735] transaction","detail":"{read_only:false; response_revision:1263; number_of_response:1; }","duration":"343.098032ms","start":"2024-08-15T00:08:44.829862Z","end":"2024-08-15T00:08:45.172960Z","steps":["trace[1567391735] 'process raft request'  (duration: 342.758287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:45.173086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.531333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:08:45.173111Z","caller":"traceutil/trace.go:171","msg":"trace[1733662269] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1263; }","duration":"278.628069ms","start":"2024-08-15T00:08:44.894476Z","end":"2024-08-15T00:08:45.173104Z","steps":["trace[1733662269] 'agreement among raft nodes before linearized reading'  (duration: 278.512026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:45.173140Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:08:44.829847Z","time spent":"343.166921ms","remote":"127.0.0.1:52584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1254 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-15T00:08:45.173333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.798304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-15T00:08:45.173353Z","caller":"traceutil/trace.go:171","msg":"trace[927988656] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1263; }","duration":"269.819705ms","start":"2024-08-15T00:08:44.903527Z","end":"2024-08-15T00:08:45.173346Z","steps":["trace[927988656] 'agreement among raft nodes before linearized reading'  (duration: 269.737102ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:22.780505Z","caller":"traceutil/trace.go:171","msg":"trace[1279385666] transaction","detail":"{read_only:false; response_revision:1470; number_of_response:1; }","duration":"200.897506ms","start":"2024-08-15T00:09:22.579587Z","end":"2024-08-15T00:09:22.780484Z","steps":["trace[1279385666] 'process raft request'  (duration: 200.810075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:22.781072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.864929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-08-15T00:09:22.781110Z","caller":"traceutil/trace.go:171","msg":"trace[241520890] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1470; }","duration":"110.914999ms","start":"2024-08-15T00:09:22.670189Z","end":"2024-08-15T00:09:22.781104Z","steps":["trace[241520890] 'agreement among raft nodes before linearized reading'  (duration: 110.805673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:22.780944Z","caller":"traceutil/trace.go:171","msg":"trace[405929852] linearizableReadLoop","detail":"{readStateIndex:1519; appliedIndex:1518; }","duration":"110.739586ms","start":"2024-08-15T00:09:22.670193Z","end":"2024-08-15T00:09:22.780932Z","steps":["trace[405929852] 'read index received'  (duration: 110.139039ms)","trace[405929852] 'applied index is now lower than readState.Index'  (duration: 599.342µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:09:22.929067Z","caller":"traceutil/trace.go:171","msg":"trace[1349584391] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"146.401645ms","start":"2024-08-15T00:09:22.782652Z","end":"2024-08-15T00:09:22.929053Z","steps":["trace[1349584391] 'process raft request'  (duration: 144.805451ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:39.849095Z","caller":"traceutil/trace.go:171","msg":"trace[912834345] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"397.754932ms","start":"2024-08-15T00:09:39.451326Z","end":"2024-08-15T00:09:39.849081Z","steps":["trace[912834345] 'process raft request'  (duration: 397.511348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:39.849213Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:09:39.451307Z","time spent":"397.835978ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1641 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-15T00:10:33.462896Z","caller":"traceutil/trace.go:171","msg":"trace[352033245] transaction","detail":"{read_only:false; response_revision:1928; number_of_response:1; }","duration":"113.663358ms","start":"2024-08-15T00:10:33.349214Z","end":"2024-08-15T00:10:33.462877Z","steps":["trace[352033245] 'process raft request'  (duration: 113.547568ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:12:27 up 6 min,  0 users,  load average: 0.50, 0.82, 0.45
	Linux addons-799058 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] <==
	I0815 00:08:32.321056       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:08:55.769007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42540: use of closed network connection
	E0815 00:08:55.965205       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42562: use of closed network connection
	I0815 00:09:10.737834       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:09:11.777892       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:09:29.574393       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:09:35.764861       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.107.151"}
	I0815 00:09:51.090383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.090463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.131324       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.131435       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.133030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.133076       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.141069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.141171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.189368       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.189410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:09:52.133941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:09:52.189904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 00:09:52.292393       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0815 00:09:52.803797       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:09:52.964637       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.230.173"}
	E0815 00:09:54.738976       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 00:10:01.002892       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.195:8443->10.244.0.32:54876: read: connection reset by peer
	I0815 00:12:16.810436       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.45.140"}
	
	
	==> kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] <==
	W0815 00:11:02.060381       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:02.060483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:11:08.961227       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:08.961346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:11:12.235008       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:12.235134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:11:13.519616       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:13.519667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:11:54.033101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:11:54.033160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:05.288295       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:05.288395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:07.837582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:07.837799       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:12.230927       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:12.230979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:12:16.627368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.019737ms"
	I0815 00:12:16.641773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.219956ms"
	I0815 00:12:16.642542       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.208µs"
	I0815 00:12:16.647897       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="76.57µs"
	I0815 00:12:19.066203       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 00:12:19.068932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="5.246µs"
	I0815 00:12:19.072532       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0815 00:12:20.495383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.176844ms"
	I0815 00:12:20.496216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.484µs"
	
	
	==> kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:06:57.566316       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:06:57.594285       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0815 00:06:57.598973       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:06:57.673466       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:06:57.673528       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:06:57.673555       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:06:57.676679       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:06:57.676936       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:06:57.676947       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:06:57.683467       1 config.go:197] "Starting service config controller"
	I0815 00:06:57.683492       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:06:57.683517       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:06:57.683521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:06:57.688459       1 config.go:326] "Starting node config controller"
	I0815 00:06:57.688470       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:06:57.785047       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:06:57.785084       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:06:57.788831       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] <==
	W0815 00:06:48.645369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:06:48.645401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:48.645452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:06:48.645476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:48.645585       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:06:48.645610       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:06:48.644509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:06:48.645776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.503442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:06:49.503494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.544877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:06:49.544929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.567765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:06:49.567818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.679917       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:06:49.679967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:06:49.681017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 00:06:49.681060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.696782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:06:49.696846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.758045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:06:49.758099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.816840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:06:49.816933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:06:51.338222       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:12:16 addons-799058 kubelet[1228]: I0815 00:12:16.624175    1228 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc5c874-6863-4a00-aba1-0a2b4bb4462a" containerName="helm-test"
	Aug 15 00:12:16 addons-799058 kubelet[1228]: I0815 00:12:16.736626    1228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d677\" (UniqueName: \"kubernetes.io/projected/9cf92d0f-e40e-458e-a372-73ebae3a84db-kube-api-access-9d677\") pod \"hello-world-app-55bf9c44b4-wbmmj\" (UID: \"9cf92d0f-e40e-458e-a372-73ebae3a84db\") " pod="default/hello-world-app-55bf9c44b4-wbmmj"
	Aug 15 00:12:17 addons-799058 kubelet[1228]: I0815 00:12:17.848695    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whc67\" (UniqueName: \"kubernetes.io/projected/b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa-kube-api-access-whc67\") pod \"b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa\" (UID: \"b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa\") "
	Aug 15 00:12:17 addons-799058 kubelet[1228]: I0815 00:12:17.853991    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa-kube-api-access-whc67" (OuterVolumeSpecName: "kube-api-access-whc67") pod "b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa" (UID: "b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa"). InnerVolumeSpecName "kube-api-access-whc67". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:12:17 addons-799058 kubelet[1228]: I0815 00:12:17.949424    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-whc67\" (UniqueName: \"kubernetes.io/projected/b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa-kube-api-access-whc67\") on node \"addons-799058\" DevicePath \"\""
	Aug 15 00:12:18 addons-799058 kubelet[1228]: I0815 00:12:18.460611    1228 scope.go:117] "RemoveContainer" containerID="0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b"
	Aug 15 00:12:18 addons-799058 kubelet[1228]: I0815 00:12:18.493536    1228 scope.go:117] "RemoveContainer" containerID="0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b"
	Aug 15 00:12:18 addons-799058 kubelet[1228]: E0815 00:12:18.494090    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b\": container with ID starting with 0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b not found: ID does not exist" containerID="0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b"
	Aug 15 00:12:18 addons-799058 kubelet[1228]: I0815 00:12:18.494136    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b"} err="failed to get container status \"0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b\": rpc error: code = NotFound desc = could not find container \"0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b\": container with ID starting with 0fabd78348bff58afa743c588d18b852c35c2d0943ac777d2f00b652c7aa806b not found: ID does not exist"
	Aug 15 00:12:19 addons-799058 kubelet[1228]: I0815 00:12:19.154170    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a43188-62a8-436a-baf2-a45e2063afc7" path="/var/lib/kubelet/pods/49a43188-62a8-436a-baf2-a45e2063afc7/volumes"
	Aug 15 00:12:19 addons-799058 kubelet[1228]: I0815 00:12:19.154611    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585edd66-8be8-4d12-89c9-98f611d2c1d1" path="/var/lib/kubelet/pods/585edd66-8be8-4d12-89c9-98f611d2c1d1/volumes"
	Aug 15 00:12:19 addons-799058 kubelet[1228]: I0815 00:12:19.155046    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa" path="/var/lib/kubelet/pods/b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa/volumes"
	Aug 15 00:12:21 addons-799058 kubelet[1228]: E0815 00:12:21.342259    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680741341920524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:21 addons-799058 kubelet[1228]: E0815 00:12:21.342280    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680741341920524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.276681    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g97g\" (UniqueName: \"kubernetes.io/projected/a9c66fbd-c95f-454f-aa48-06f3c262c789-kube-api-access-6g97g\") pod \"a9c66fbd-c95f-454f-aa48-06f3c262c789\" (UID: \"a9c66fbd-c95f-454f-aa48-06f3c262c789\") "
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.276937    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9c66fbd-c95f-454f-aa48-06f3c262c789-webhook-cert\") pod \"a9c66fbd-c95f-454f-aa48-06f3c262c789\" (UID: \"a9c66fbd-c95f-454f-aa48-06f3c262c789\") "
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.284608    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c66fbd-c95f-454f-aa48-06f3c262c789-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a9c66fbd-c95f-454f-aa48-06f3c262c789" (UID: "a9c66fbd-c95f-454f-aa48-06f3c262c789"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.286203    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c66fbd-c95f-454f-aa48-06f3c262c789-kube-api-access-6g97g" (OuterVolumeSpecName: "kube-api-access-6g97g") pod "a9c66fbd-c95f-454f-aa48-06f3c262c789" (UID: "a9c66fbd-c95f-454f-aa48-06f3c262c789"). InnerVolumeSpecName "kube-api-access-6g97g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.377915    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6g97g\" (UniqueName: \"kubernetes.io/projected/a9c66fbd-c95f-454f-aa48-06f3c262c789-kube-api-access-6g97g\") on node \"addons-799058\" DevicePath \"\""
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.377948    1228 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9c66fbd-c95f-454f-aa48-06f3c262c789-webhook-cert\") on node \"addons-799058\" DevicePath \"\""
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.485217    1228 scope.go:117] "RemoveContainer" containerID="8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42"
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.511372    1228 scope.go:117] "RemoveContainer" containerID="8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42"
	Aug 15 00:12:22 addons-799058 kubelet[1228]: E0815 00:12:22.512217    1228 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42\": container with ID starting with 8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42 not found: ID does not exist" containerID="8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42"
	Aug 15 00:12:22 addons-799058 kubelet[1228]: I0815 00:12:22.512263    1228 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42"} err="failed to get container status \"8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42\": rpc error: code = NotFound desc = could not find container \"8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42\": container with ID starting with 8712f93950efb0a77b99077ea271777e65d5288f2e7b27e42e1bd597ee5eeb42 not found: ID does not exist"
	Aug 15 00:12:23 addons-799058 kubelet[1228]: I0815 00:12:23.154216    1228 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c66fbd-c95f-454f-aa48-06f3c262c789" path="/var/lib/kubelet/pods/a9c66fbd-c95f-454f-aa48-06f3c262c789/volumes"
	
	
	==> storage-provisioner [4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290] <==
	I0815 00:07:03.311878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:07:03.406460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:07:03.413546       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:07:03.696704       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:07:03.700431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4105451-058f-494a-a107-b03c804af7c5", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b became leader
	I0815 00:07:03.704636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b!
	I0815 00:07:03.808788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b!
	

                                                
                                                
-- /stdout --
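Of the component logs dumped above, the kube-proxy warning about nodePortAddresses names its own remedy (`--nodeport-addresses primary`). A minimal sketch of inspecting and adjusting that setting in a kubeadm-managed cluster like this one; the `kube-proxy` ConfigMap name and the `nodePortAddresses` config-file field are assumptions here, and only the flag value itself is taken from the log line:

	# Inspect the running kube-proxy configuration, then restrict NodePort listeners
	# to the node's primary IPs instead of all local IPs (sketch only; not applied in this run).
	kubectl --context addons-799058 -n kube-system get configmap kube-proxy -o yaml
	kubectl --context addons-799058 -n kube-system edit configmap kube-proxy
	#   in the KubeProxyConfiguration document, set: nodePortAddresses: ["primary"]   (assumed config-file equivalent)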
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-799058 -n addons-799058
helpers_test.go:261: (dbg) Run:  kubectl --context addons-799058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.42s)
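For manual follow-up on a failure like this one, the ingress-nginx controller and the Ingress created from testdata/nginx-ingress-v1.yaml can be inspected directly. This is a sketch only: the deployment name ingress-nginx-controller is the addon's usual name but is assumed here, and the Ingress name is left to the listing since these logs do not show it.

	# Check that the controller is up, whether the Ingress was admitted and got an address, and what the controller logged.
	kubectl --context addons-799058 -n ingress-nginx get pods,svc
	kubectl --context addons-799058 get ingress -o wide
	kubectl --context addons-799058 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50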

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (290.74s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.654428ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-q4bwq" [95a56e8f-f680-4b31-bdc3-34e9e748a9b7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002916629s
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (105.201978ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m14.361649976s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (62.71954ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m16.602961305s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (61.934356ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m22.655730882s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (60.513339ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m30.959883012s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (69.414599ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m36.765214256s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (69.065702ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 2m44.890008596s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (62.580133ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 3m9.545856555s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (68.617136ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 3m29.609940666s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (64.181345ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 4m34.176972321s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (62.437288ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 5m31.466150267s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-799058 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-799058 top pods -n kube-system: exit status 1 (60.243866ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-52frj, age: 6m56.539042131s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
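Since every `kubectl top pods` attempt across the retries above returned "Metrics not available", a natural next check is whether the aggregated metrics API ever became available. A sketch, assuming the standard v1beta1.metrics.k8s.io APIService name; the Deployment name matches the metrics-server pod shown earlier in this test:

	# Is metrics.k8s.io registered and Available, and what does metrics-server itself report?
	kubectl --context addons-799058 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-799058 -n kube-system describe deploy/metrics-server
	kubectl --context addons-799058 -n kube-system logs deploy/metrics-server --tail=50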
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-799058 -n addons-799058
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 logs -n 25: (1.159156293s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-303162                                                                     | download-only-303162 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-836990 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | binary-mirror-836990                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37773                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-836990                                                                     | binary-mirror-836990 | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC |                     |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-799058 --wait=true                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:06 UTC | 15 Aug 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:08 UTC | 15 Aug 24 00:09 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| ip      | addons-799058 ip                                                                            | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | -p addons-799058                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | addons-799058                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | -p addons-799058                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-799058 ssh cat                                                                       | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | /opt/local-path-provisioner/pvc-91dd3a08-78ae-4a50-9888-964894be42ae_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:09 UTC | 15 Aug 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:10 UTC | 15 Aug 24 00:10 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-799058 ssh curl -s                                                                   | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:10 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-799058 ip                                                                            | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-799058 addons disable                                                                | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:12 UTC | 15 Aug 24 00:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-799058 addons                                                                        | addons-799058        | jenkins | v1.33.1 | 15 Aug 24 00:13 UTC | 15 Aug 24 00:13 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:06:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:06:10.190820   21011 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:06:10.190906   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:06:10.190914   21011 out.go:304] Setting ErrFile to fd 2...
	I0815 00:06:10.190918   21011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:06:10.191060   21011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:06:10.191619   21011 out.go:298] Setting JSON to false
	I0815 00:06:10.192431   21011 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2915,"bootTime":1723677455,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:06:10.192479   21011 start.go:139] virtualization: kvm guest
	I0815 00:06:10.194676   21011 out.go:177] * [addons-799058] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:06:10.196085   21011 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:06:10.196084   21011 notify.go:220] Checking for updates...
	I0815 00:06:10.198508   21011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:06:10.199610   21011 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:06:10.200799   21011 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:10.201808   21011 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:06:10.202890   21011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:06:10.204146   21011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:06:10.234542   21011 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 00:06:10.235591   21011 start.go:297] selected driver: kvm2
	I0815 00:06:10.235614   21011 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:06:10.235625   21011 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:06:10.236242   21011 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:06:10.236300   21011 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:06:10.249863   21011 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:06:10.249899   21011 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:06:10.250117   21011 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:06:10.250186   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:10.250201   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:10.250210   21011 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 00:06:10.250268   21011 start.go:340] cluster config:
	{Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:06:10.250378   21011 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:06:10.252143   21011 out.go:177] * Starting "addons-799058" primary control-plane node in "addons-799058" cluster
	I0815 00:06:10.253332   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:06:10.253357   21011 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:06:10.253363   21011 cache.go:56] Caching tarball of preloaded images
	I0815 00:06:10.253461   21011 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:06:10.253476   21011 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:06:10.253784   21011 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json ...
	I0815 00:06:10.253805   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json: {Name:mk8ebdac0451abf719046a00b1896a9a27305305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:10.253952   21011 start.go:360] acquireMachinesLock for addons-799058: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:06:10.254009   21011 start.go:364] duration metric: took 40.749µs to acquireMachinesLock for "addons-799058"
	I0815 00:06:10.254029   21011 start.go:93] Provisioning new machine with config: &{Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:10.254104   21011 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 00:06:10.255574   21011 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 00:06:10.255700   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:10.255747   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:10.269223   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
	I0815 00:06:10.269642   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:10.270102   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:10.270123   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:10.270485   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:10.270669   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:10.270799   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:10.270922   21011 start.go:159] libmachine.API.Create for "addons-799058" (driver="kvm2")
	I0815 00:06:10.270949   21011 client.go:168] LocalClient.Create starting
	I0815 00:06:10.270985   21011 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:06:10.507109   21011 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:06:10.634737   21011 main.go:141] libmachine: Running pre-create checks...
	I0815 00:06:10.634761   21011 main.go:141] libmachine: (addons-799058) Calling .PreCreateCheck
	I0815 00:06:10.635209   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:10.635608   21011 main.go:141] libmachine: Creating machine...
	I0815 00:06:10.635620   21011 main.go:141] libmachine: (addons-799058) Calling .Create
	I0815 00:06:10.635727   21011 main.go:141] libmachine: (addons-799058) Creating KVM machine...
	I0815 00:06:10.636869   21011 main.go:141] libmachine: (addons-799058) DBG | found existing default KVM network
	I0815 00:06:10.637556   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.637427   21032 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0815 00:06:10.637577   21011 main.go:141] libmachine: (addons-799058) DBG | created network xml: 
	I0815 00:06:10.637587   21011 main.go:141] libmachine: (addons-799058) DBG | <network>
	I0815 00:06:10.637594   21011 main.go:141] libmachine: (addons-799058) DBG |   <name>mk-addons-799058</name>
	I0815 00:06:10.637603   21011 main.go:141] libmachine: (addons-799058) DBG |   <dns enable='no'/>
	I0815 00:06:10.637611   21011 main.go:141] libmachine: (addons-799058) DBG |   
	I0815 00:06:10.637621   21011 main.go:141] libmachine: (addons-799058) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 00:06:10.637631   21011 main.go:141] libmachine: (addons-799058) DBG |     <dhcp>
	I0815 00:06:10.637641   21011 main.go:141] libmachine: (addons-799058) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 00:06:10.637651   21011 main.go:141] libmachine: (addons-799058) DBG |     </dhcp>
	I0815 00:06:10.637682   21011 main.go:141] libmachine: (addons-799058) DBG |   </ip>
	I0815 00:06:10.637703   21011 main.go:141] libmachine: (addons-799058) DBG |   
	I0815 00:06:10.637710   21011 main.go:141] libmachine: (addons-799058) DBG | </network>
	I0815 00:06:10.637717   21011 main.go:141] libmachine: (addons-799058) DBG | 
	I0815 00:06:10.642660   21011 main.go:141] libmachine: (addons-799058) DBG | trying to create private KVM network mk-addons-799058 192.168.39.0/24...
	I0815 00:06:10.703000   21011 main.go:141] libmachine: (addons-799058) DBG | private KVM network mk-addons-799058 192.168.39.0/24 created
	I0815 00:06:10.703036   21011 main.go:141] libmachine: (addons-799058) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 ...
	I0815 00:06:10.703062   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.702929   21032 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:10.703078   21011 main.go:141] libmachine: (addons-799058) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:06:10.703091   21011 main.go:141] libmachine: (addons-799058) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:06:10.960342   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:10.960237   21032 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa...
	I0815 00:06:11.251423   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:11.251295   21032 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/addons-799058.rawdisk...
	I0815 00:06:11.251443   21011 main.go:141] libmachine: (addons-799058) DBG | Writing magic tar header
	I0815 00:06:11.251452   21011 main.go:141] libmachine: (addons-799058) DBG | Writing SSH key tar header
	I0815 00:06:11.251465   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:11.251413   21032 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 ...
	I0815 00:06:11.251582   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058
	I0815 00:06:11.251624   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:06:11.251637   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058 (perms=drwx------)
	I0815 00:06:11.251657   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:06:11.251668   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:06:11.251686   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:06:11.251698   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:06:11.251707   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:06:11.251716   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:06:11.251729   21011 main.go:141] libmachine: (addons-799058) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:06:11.251744   21011 main.go:141] libmachine: (addons-799058) Creating domain...
	I0815 00:06:11.251757   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:06:11.251767   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:06:11.251774   21011 main.go:141] libmachine: (addons-799058) DBG | Checking permissions on dir: /home
	I0815 00:06:11.251786   21011 main.go:141] libmachine: (addons-799058) DBG | Skipping /home - not owner
	I0815 00:06:11.252608   21011 main.go:141] libmachine: (addons-799058) define libvirt domain using xml: 
	I0815 00:06:11.252638   21011 main.go:141] libmachine: (addons-799058) <domain type='kvm'>
	I0815 00:06:11.252662   21011 main.go:141] libmachine: (addons-799058)   <name>addons-799058</name>
	I0815 00:06:11.252678   21011 main.go:141] libmachine: (addons-799058)   <memory unit='MiB'>4000</memory>
	I0815 00:06:11.252688   21011 main.go:141] libmachine: (addons-799058)   <vcpu>2</vcpu>
	I0815 00:06:11.252704   21011 main.go:141] libmachine: (addons-799058)   <features>
	I0815 00:06:11.252714   21011 main.go:141] libmachine: (addons-799058)     <acpi/>
	I0815 00:06:11.252728   21011 main.go:141] libmachine: (addons-799058)     <apic/>
	I0815 00:06:11.252739   21011 main.go:141] libmachine: (addons-799058)     <pae/>
	I0815 00:06:11.252747   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.252756   21011 main.go:141] libmachine: (addons-799058)   </features>
	I0815 00:06:11.252765   21011 main.go:141] libmachine: (addons-799058)   <cpu mode='host-passthrough'>
	I0815 00:06:11.252784   21011 main.go:141] libmachine: (addons-799058)   
	I0815 00:06:11.252802   21011 main.go:141] libmachine: (addons-799058)   </cpu>
	I0815 00:06:11.252815   21011 main.go:141] libmachine: (addons-799058)   <os>
	I0815 00:06:11.252826   21011 main.go:141] libmachine: (addons-799058)     <type>hvm</type>
	I0815 00:06:11.252837   21011 main.go:141] libmachine: (addons-799058)     <boot dev='cdrom'/>
	I0815 00:06:11.252850   21011 main.go:141] libmachine: (addons-799058)     <boot dev='hd'/>
	I0815 00:06:11.252866   21011 main.go:141] libmachine: (addons-799058)     <bootmenu enable='no'/>
	I0815 00:06:11.252878   21011 main.go:141] libmachine: (addons-799058)   </os>
	I0815 00:06:11.252888   21011 main.go:141] libmachine: (addons-799058)   <devices>
	I0815 00:06:11.252899   21011 main.go:141] libmachine: (addons-799058)     <disk type='file' device='cdrom'>
	I0815 00:06:11.252916   21011 main.go:141] libmachine: (addons-799058)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/boot2docker.iso'/>
	I0815 00:06:11.252928   21011 main.go:141] libmachine: (addons-799058)       <target dev='hdc' bus='scsi'/>
	I0815 00:06:11.252938   21011 main.go:141] libmachine: (addons-799058)       <readonly/>
	I0815 00:06:11.252961   21011 main.go:141] libmachine: (addons-799058)     </disk>
	I0815 00:06:11.252973   21011 main.go:141] libmachine: (addons-799058)     <disk type='file' device='disk'>
	I0815 00:06:11.252984   21011 main.go:141] libmachine: (addons-799058)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:06:11.253001   21011 main.go:141] libmachine: (addons-799058)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/addons-799058.rawdisk'/>
	I0815 00:06:11.253013   21011 main.go:141] libmachine: (addons-799058)       <target dev='hda' bus='virtio'/>
	I0815 00:06:11.253024   21011 main.go:141] libmachine: (addons-799058)     </disk>
	I0815 00:06:11.253037   21011 main.go:141] libmachine: (addons-799058)     <interface type='network'>
	I0815 00:06:11.253049   21011 main.go:141] libmachine: (addons-799058)       <source network='mk-addons-799058'/>
	I0815 00:06:11.253061   21011 main.go:141] libmachine: (addons-799058)       <model type='virtio'/>
	I0815 00:06:11.253068   21011 main.go:141] libmachine: (addons-799058)     </interface>
	I0815 00:06:11.253082   21011 main.go:141] libmachine: (addons-799058)     <interface type='network'>
	I0815 00:06:11.253098   21011 main.go:141] libmachine: (addons-799058)       <source network='default'/>
	I0815 00:06:11.253110   21011 main.go:141] libmachine: (addons-799058)       <model type='virtio'/>
	I0815 00:06:11.253121   21011 main.go:141] libmachine: (addons-799058)     </interface>
	I0815 00:06:11.253133   21011 main.go:141] libmachine: (addons-799058)     <serial type='pty'>
	I0815 00:06:11.253143   21011 main.go:141] libmachine: (addons-799058)       <target port='0'/>
	I0815 00:06:11.253153   21011 main.go:141] libmachine: (addons-799058)     </serial>
	I0815 00:06:11.253167   21011 main.go:141] libmachine: (addons-799058)     <console type='pty'>
	I0815 00:06:11.253183   21011 main.go:141] libmachine: (addons-799058)       <target type='serial' port='0'/>
	I0815 00:06:11.253194   21011 main.go:141] libmachine: (addons-799058)     </console>
	I0815 00:06:11.253206   21011 main.go:141] libmachine: (addons-799058)     <rng model='virtio'>
	I0815 00:06:11.253215   21011 main.go:141] libmachine: (addons-799058)       <backend model='random'>/dev/random</backend>
	I0815 00:06:11.253227   21011 main.go:141] libmachine: (addons-799058)     </rng>
	I0815 00:06:11.253239   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.253250   21011 main.go:141] libmachine: (addons-799058)     
	I0815 00:06:11.253259   21011 main.go:141] libmachine: (addons-799058)   </devices>
	I0815 00:06:11.253268   21011 main.go:141] libmachine: (addons-799058) </domain>
	I0815 00:06:11.253278   21011 main.go:141] libmachine: (addons-799058) 
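The XML dumped above is the complete libvirt domain definition the kvm2 driver generates for this profile: the boot2docker ISO as a SCSI cdrom, the raw disk as a virtio disk, two virtio NICs (one on the private mk-addons-799058 network, one on libvirt's default network), a serial console, and a virtio RNG. As a minimal sketch of how such a definition is handed to libvirt and booted, using the libvirt Go bindings (libvirt.org/go/libvirt here, which is an assumption for illustration, not necessarily the driver's exact dependency; the XML string is a placeholder):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// Connect to the system hypervisor, matching KVMQemuURI:qemu:///system used by the driver.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Define the persistent domain from the generated XML...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// ...and boot it (the "Creating domain..." step in the log above).
	return dom.Create()
}

func main() {
	// Placeholder XML; a real definition like the one logged above is required.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}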
	I0815 00:06:11.258761   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:c4:4c:bc in network default
	I0815 00:06:11.259268   21011 main.go:141] libmachine: (addons-799058) Ensuring networks are active...
	I0815 00:06:11.259294   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:11.259887   21011 main.go:141] libmachine: (addons-799058) Ensuring network default is active
	I0815 00:06:11.260117   21011 main.go:141] libmachine: (addons-799058) Ensuring network mk-addons-799058 is active
	I0815 00:06:11.260544   21011 main.go:141] libmachine: (addons-799058) Getting domain xml...
	I0815 00:06:11.261240   21011 main.go:141] libmachine: (addons-799058) Creating domain...
	I0815 00:06:12.861274   21011 main.go:141] libmachine: (addons-799058) Waiting to get IP...
	I0815 00:06:12.862014   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:12.862395   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:12.862441   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:12.862386   21032 retry.go:31] will retry after 269.705346ms: waiting for machine to come up
	I0815 00:06:13.133747   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.134124   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.134150   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.134088   21032 retry.go:31] will retry after 277.095287ms: waiting for machine to come up
	I0815 00:06:13.412503   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.412952   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.412984   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.412932   21032 retry.go:31] will retry after 404.245054ms: waiting for machine to come up
	I0815 00:06:13.818206   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:13.818662   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:13.818687   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:13.818599   21032 retry.go:31] will retry after 475.920955ms: waiting for machine to come up
	I0815 00:06:14.296251   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:14.296718   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:14.296747   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:14.296679   21032 retry.go:31] will retry after 541.891693ms: waiting for machine to come up
	I0815 00:06:14.840411   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:14.840884   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:14.840914   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:14.840835   21032 retry.go:31] will retry after 580.924582ms: waiting for machine to come up
	I0815 00:06:15.422974   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:15.423337   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:15.423360   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:15.423292   21032 retry.go:31] will retry after 737.223719ms: waiting for machine to come up
	I0815 00:06:16.161984   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:16.162317   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:16.162342   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:16.162294   21032 retry.go:31] will retry after 1.183276904s: waiting for machine to come up
	I0815 00:06:17.347441   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:17.347844   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:17.347865   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:17.347806   21032 retry.go:31] will retry after 1.210237149s: waiting for machine to come up
	I0815 00:06:18.560280   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:18.560748   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:18.560767   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:18.560710   21032 retry.go:31] will retry after 1.864257604s: waiting for machine to come up
	I0815 00:06:20.426824   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:20.427224   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:20.427251   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:20.427191   21032 retry.go:31] will retry after 2.012133674s: waiting for machine to come up
	I0815 00:06:22.441669   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:22.442169   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:22.442197   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:22.442123   21032 retry.go:31] will retry after 2.238688406s: waiting for machine to come up
	I0815 00:06:24.683348   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:24.683813   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:24.683837   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:24.683746   21032 retry.go:31] will retry after 4.140150604s: waiting for machine to come up
	I0815 00:06:28.827790   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:28.828251   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find current IP address of domain addons-799058 in network mk-addons-799058
	I0815 00:06:28.828282   21011 main.go:141] libmachine: (addons-799058) DBG | I0815 00:06:28.828191   21032 retry.go:31] will retry after 5.464126204s: waiting for machine to come up
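The driver cannot learn the guest's address until the VM's DHCP client obtains a lease, so it polls for an IP with a growing delay between attempts (roughly 270ms up to several seconds above, via retry.go). A minimal sketch of that poll-with-backoff pattern; the lookup function and the exact timings are stand-ins, not minikube's retry helper:

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// lookupIP is a stand-in for querying the network's DHCP leases for the domain's MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

// waitForIP polls lookupIP, sleeping a little longer after each failure,
// until an address appears or the deadline passes.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		log.Printf("attempt %d: %v; will retry after %s", attempt, err, delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the wait, as the retries above do
		}
	}
}

func main() {
	if _, err := waitForIP("52:54:00:e5:8d:47", 10*time.Second); err != nil {
		log.Fatal(err)
	}
}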
	I0815 00:06:34.296492   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.296981   21011 main.go:141] libmachine: (addons-799058) Found IP for machine: 192.168.39.195
	I0815 00:06:34.296999   21011 main.go:141] libmachine: (addons-799058) Reserving static IP address...
	I0815 00:06:34.297023   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has current primary IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.297414   21011 main.go:141] libmachine: (addons-799058) DBG | unable to find host DHCP lease matching {name: "addons-799058", mac: "52:54:00:e5:8d:47", ip: "192.168.39.195"} in network mk-addons-799058
	I0815 00:06:34.366887   21011 main.go:141] libmachine: (addons-799058) DBG | Getting to WaitForSSH function...
	I0815 00:06:34.366920   21011 main.go:141] libmachine: (addons-799058) Reserved static IP address: 192.168.39.195
	I0815 00:06:34.366967   21011 main.go:141] libmachine: (addons-799058) Waiting for SSH to be available...
	I0815 00:06:34.369425   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.369802   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.369829   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.370007   21011 main.go:141] libmachine: (addons-799058) DBG | Using SSH client type: external
	I0815 00:06:34.370046   21011 main.go:141] libmachine: (addons-799058) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa (-rw-------)
	I0815 00:06:34.370083   21011 main.go:141] libmachine: (addons-799058) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:06:34.370097   21011 main.go:141] libmachine: (addons-799058) DBG | About to run SSH command:
	I0815 00:06:34.370112   21011 main.go:141] libmachine: (addons-799058) DBG | exit 0
	I0815 00:06:34.500705   21011 main.go:141] libmachine: (addons-799058) DBG | SSH cmd err, output: <nil>: 
	I0815 00:06:34.501016   21011 main.go:141] libmachine: (addons-799058) KVM machine creation complete!
	I0815 00:06:34.501349   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:34.501890   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:34.502104   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:34.502291   21011 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:06:34.502312   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:34.503609   21011 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:06:34.503627   21011 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:06:34.503639   21011 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:06:34.503646   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.506083   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.506469   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.506491   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.506543   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.506724   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.506864   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.507075   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.507241   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.507413   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.507424   21011 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:06:34.607449   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
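The "exit 0" probe above is how libmachine decides the guest's SSH daemon is ready: it opens a session with the generated key and runs a no-op command. A sketch of the same check with golang.org/x/crypto/ssh; the host, user, key path, and the InsecureIgnoreHostKey choice mirror the options shown in the log but are illustrative, not libmachine's code:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshReady dials host:22 as user with the given private key and runs "exit 0".
func sshReady(host, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// Placeholder key path; the log uses the generated machines/addons-799058/id_rsa key.
	if err := sshReady("192.168.39.195", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}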
	I0815 00:06:34.607472   21011 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:06:34.607480   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.609946   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.610248   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.610281   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.610394   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.610567   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.610736   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.610864   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.611018   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.611247   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.611264   21011 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:06:34.712976   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:06:34.713063   21011 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:06:34.713070   21011 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:06:34.713077   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.713337   21011 buildroot.go:166] provisioning hostname "addons-799058"
	I0815 00:06:34.713374   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.713534   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.716021   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.716314   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.716338   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.716506   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.716700   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.716856   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.716995   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.717159   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.717309   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.717320   21011 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-799058 && echo "addons-799058" | sudo tee /etc/hostname
	I0815 00:06:34.828895   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-799058
	
	I0815 00:06:34.828921   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.831482   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.831877   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.831906   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.832057   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:34.832211   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.832396   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:34.832519   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:34.832703   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:34.832871   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:34.832893   21011 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-799058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-799058/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-799058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:06:34.940050   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:06:34.940083   21011 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:06:34.940110   21011 buildroot.go:174] setting up certificates
	I0815 00:06:34.940125   21011 provision.go:84] configureAuth start
	I0815 00:06:34.940134   21011 main.go:141] libmachine: (addons-799058) Calling .GetMachineName
	I0815 00:06:34.940372   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:34.942815   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.943139   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.943167   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.943326   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:34.945351   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.945694   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:34.945720   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:34.945846   21011 provision.go:143] copyHostCerts
	I0815 00:06:34.945917   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:06:34.946041   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:06:34.946121   21011 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:06:34.946187   21011 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.addons-799058 san=[127.0.0.1 192.168.39.195 addons-799058 localhost minikube]
	I0815 00:06:35.144674   21011 provision.go:177] copyRemoteCerts
	I0815 00:06:35.144743   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:06:35.144771   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.147413   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.147693   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.147719   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.147910   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.148113   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.148231   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.148366   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.226572   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:06:35.248541   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:06:35.269897   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:06:35.291544   21011 provision.go:87] duration metric: took 351.409181ms to configureAuth
	I0815 00:06:35.291568   21011 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:06:35.291741   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:35.291813   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.294511   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.294825   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.294849   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.294999   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.295233   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.295390   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.295526   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.295676   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:35.295830   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:35.295845   21011 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:06:35.552944   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:06:35.552978   21011 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:06:35.552990   21011 main.go:141] libmachine: (addons-799058) Calling .GetURL
	I0815 00:06:35.554503   21011 main.go:141] libmachine: (addons-799058) DBG | Using libvirt version 6000000
	I0815 00:06:35.556782   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.557162   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.557191   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.557356   21011 main.go:141] libmachine: Docker is up and running!
	I0815 00:06:35.557376   21011 main.go:141] libmachine: Reticulating splines...
	I0815 00:06:35.557383   21011 client.go:171] duration metric: took 25.286426747s to LocalClient.Create
	I0815 00:06:35.557404   21011 start.go:167] duration metric: took 25.286481251s to libmachine.API.Create "addons-799058"
	I0815 00:06:35.557417   21011 start.go:293] postStartSetup for "addons-799058" (driver="kvm2")
	I0815 00:06:35.557436   21011 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:06:35.557454   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.557707   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:06:35.557732   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.560242   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.560673   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.560698   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.560840   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.561010   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.561159   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.561289   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.642584   21011 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:06:35.646522   21011 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:06:35.646544   21011 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:06:35.646621   21011 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:06:35.646647   21011 start.go:296] duration metric: took 89.218187ms for postStartSetup
	I0815 00:06:35.646679   21011 main.go:141] libmachine: (addons-799058) Calling .GetConfigRaw
	I0815 00:06:35.647207   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:35.649533   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.649822   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.649848   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.650047   21011 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/config.json ...
	I0815 00:06:35.650216   21011 start.go:128] duration metric: took 25.396100957s to createHost
	I0815 00:06:35.650237   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.652512   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.652785   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.652812   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.652963   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.653132   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.653267   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.653400   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.653534   21011 main.go:141] libmachine: Using SSH client type: native
	I0815 00:06:35.653734   21011 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0815 00:06:35.653749   21011 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:06:35.752917   21011 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723680395.730873362
	
	I0815 00:06:35.752935   21011 fix.go:216] guest clock: 1723680395.730873362
	I0815 00:06:35.752942   21011 fix.go:229] Guest: 2024-08-15 00:06:35.730873362 +0000 UTC Remote: 2024-08-15 00:06:35.650227152 +0000 UTC m=+25.491307107 (delta=80.64621ms)
	I0815 00:06:35.752981   21011 fix.go:200] guest clock delta is within tolerance: 80.64621ms
	I0815 00:06:35.752987   21011 start.go:83] releasing machines lock for "addons-799058", held for 25.498966551s
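A note on the "%!s(MISSING)" fragments that appear in several commands in this log (the printf in the CRIO_MINIKUBE_OPTIONS step above, the "date +%!s(MISSING).%!N(MISSING)" clock check, and the 0%! values in the kubelet config at the end of this section): they are artifacts of the logger, not of the commands that actually ran. The command strings contain literal %s, %N, and % characters, and when such a string is passed to a Printf-style logger as the format argument with no operands, Go renders each unmatched verb as %!verb(MISSING). A small, generic illustration of the effect (not minikube's logging code):

package main

import "log"

func main() {
	cmd := `date +%s.%N` // a shell command that happens to contain printf-style verbs

	// Passing the string as the format re-interprets %s and %N with no arguments:
	log.Printf(cmd) // logs: date +%!s(MISSING).%!N(MISSING)

	// Passing it as an operand leaves it intact:
	log.Printf("%s", cmd) // logs: date +%s.%N
}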
	I0815 00:06:35.753006   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.753269   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:35.755785   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.756172   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.756200   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.756311   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.756759   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.756931   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:35.757027   21011 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:06:35.757076   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.757136   21011 ssh_runner.go:195] Run: cat /version.json
	I0815 00:06:35.757160   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:35.759665   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.759989   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.760016   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760034   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760181   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.760335   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.760407   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:35.760435   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:35.760505   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.760610   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:35.760694   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.760850   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:35.761011   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:35.761143   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:35.879186   21011 ssh_runner.go:195] Run: systemctl --version
	I0815 00:06:35.885448   21011 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:06:36.044090   21011 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:06:36.049846   21011 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:06:36.049905   21011 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:06:36.064232   21011 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:06:36.064254   21011 start.go:495] detecting cgroup driver to use...
	I0815 00:06:36.064305   21011 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:06:36.078926   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:06:36.092167   21011 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:06:36.092219   21011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:06:36.105009   21011 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:06:36.117801   21011 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:06:36.230456   21011 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:06:36.368784   21011 docker.go:233] disabling docker service ...
	I0815 00:06:36.368854   21011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:06:36.383097   21011 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:06:36.395202   21011 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:06:36.529505   21011 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:06:36.646399   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:06:36.658932   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:06:36.676100   21011 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:06:36.676179   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.685818   21011 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:06:36.685886   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.695388   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.704858   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.714417   21011 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:06:36.723945   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.733195   21011 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.748766   21011 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:06:36.758117   21011 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:06:36.766482   21011 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:06:36.766534   21011 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:06:36.777972   21011 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:06:36.786465   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:36.898183   21011 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:06:37.025230   21011 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:06:37.025322   21011 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:06:37.029933   21011 start.go:563] Will wait 60s for crictl version
	I0815 00:06:37.030005   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:06:37.033417   21011 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:06:37.072396   21011 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
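After restarting CRI-O, minikube waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer before proceeding, as the two "Will wait 60s" lines above show. A sketch of that kind of bounded wait on a socket path, done locally for simplicity (minikube runs the equivalent stat and crictl checks over SSH):

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present; the runtime can be probed next
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
}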
	I0815 00:06:37.072508   21011 ssh_runner.go:195] Run: crio --version
	I0815 00:06:37.098595   21011 ssh_runner.go:195] Run: crio --version
	I0815 00:06:37.124731   21011 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:06:37.125917   21011 main.go:141] libmachine: (addons-799058) Calling .GetIP
	I0815 00:06:37.128483   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:37.128946   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:37.128974   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:37.129162   21011 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:06:37.133185   21011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:06:37.144483   21011 kubeadm.go:883] updating cluster {Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0815 00:06:37.144585   21011 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:06:37.144625   21011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:06:37.174107   21011 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 00:06:37.174176   21011 ssh_runner.go:195] Run: which lz4
	I0815 00:06:37.177693   21011 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 00:06:37.181238   21011 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 00:06:37.181263   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 00:06:38.242706   21011 crio.go:462] duration metric: took 1.065040637s to copy over tarball
	I0815 00:06:38.242788   21011 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 00:06:40.288709   21011 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.045888537s)
	I0815 00:06:40.288737   21011 crio.go:469] duration metric: took 2.046004098s to extract the tarball
	I0815 00:06:40.288744   21011 ssh_runner.go:146] rm: /preloaded.tar.lz4
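The preload step copies the ~389MB lz4-compressed image tarball to the guest and unpacks it into /var with "tar -I lz4", which accounts for the roughly two seconds of extraction time reported above. Purely as an illustration of the same unpacking done in-process (minikube itself shells out to tar over SSH, as the log shows), a sketch using github.com/pierrec/lz4 and archive/tar; the path is a placeholder:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Placeholder path; the real file is the cached preloaded-images-*.tar.lz4.
	f, err := os.Open("/preloaded.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decompress the lz4 stream and walk the tar entries inside it.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
}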
	I0815 00:06:40.324163   21011 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:06:40.361857   21011 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:06:40.361877   21011 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:06:40.361884   21011 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.0 crio true true} ...
	I0815 00:06:40.362002   21011 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-799058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:06:40.362076   21011 ssh_runner.go:195] Run: crio config
	I0815 00:06:40.408991   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:40.409007   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:40.409015   21011 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:06:40.409035   21011 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-799058 NodeName:addons-799058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:06:40.409185   21011 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-799058"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
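Further down (00:06:41.953494) kubeadm warns that this kubeadm.k8s.io/v1beta3 spec is deprecated. A hedged sketch of how the generated file could be migrated and re-checked on the node, reusing the kubeadm binary and config path that appear later in this log (the output path here is hypothetical):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /tmp/kubeadm-v1beta4.yaml        # hypothetical output path
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /tmp/kubeadm-v1beta4.yaml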
	
	I0815 00:06:40.409254   21011 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:06:40.418321   21011 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:06:40.418379   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:06:40.427030   21011 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 00:06:40.442309   21011 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:06:40.457211   21011 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
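The three files just copied (the kubelet drop-in, the kubelet unit, and the kubeadm config) can be inspected directly on the node when a run needs debugging; the paths are the ones in the scp lines above:

    cat /lib/systemd/system/kubelet.service
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # should carry the ExecStart line shown at 00:06:40.362002
    sudo cat /var/tmp/minikube/kubeadm.yaml.new                 # the kubeadm config dumped above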
	I0815 00:06:40.472017   21011 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0815 00:06:40.475430   21011 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
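If the control-plane host entry ever needs checking by hand, the same grep the log runs plus a resolver-level check will do (hostname and IP taken from the log):

    grep control-plane.minikube.internal /etc/hosts          # expect: 192.168.39.195	control-plane.minikube.internal
    getent hosts control-plane.minikube.internal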
	I0815 00:06:40.486026   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:40.604111   21011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:40.619831   21011 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058 for IP: 192.168.39.195
	I0815 00:06:40.619860   21011 certs.go:194] generating shared ca certs ...
	I0815 00:06:40.619880   21011 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.620036   21011 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:06:40.825973   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt ...
	I0815 00:06:40.826000   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt: {Name:mkd3e103dfde5f206ead9a3e4d8372a081099209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.826158   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key ...
	I0815 00:06:40.826175   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key: {Name:mk858692bd11cbc88063c41a856d1ac58611345d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.826248   21011 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:06:40.997336   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt ...
	I0815 00:06:40.997366   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt: {Name:mke403b2a0c9b8a48d4da4e9d029de98a1d02c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.997535   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key ...
	I0815 00:06:40.997546   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key: {Name:mkea6fc1db5986e1d892c17d1aa0b30b9bc24b7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:40.997615   21011 certs.go:256] generating profile certs ...
	I0815 00:06:40.997671   21011 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key
	I0815 00:06:40.997685   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt with IP's: []
	I0815 00:06:41.047187   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt ...
	I0815 00:06:41.047213   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: {Name:mkc8ff87590ba027b7b2e49b84053e4ac4e7196b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.047363   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key ...
	I0815 00:06:41.047373   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.key: {Name:mk68c7c40f8d859acb7013258245941eb8d6c252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.047444   21011 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016
	I0815 00:06:41.047462   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0815 00:06:41.400706   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 ...
	I0815 00:06:41.400740   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016: {Name:mk883b5f3f1cc11cbbc4632f9f43ffe1babbaa44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.400899   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016 ...
	I0815 00:06:41.400912   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016: {Name:mk65edcec3fd42e0963f07457048457a5f14bf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.400996   21011 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt.1f59b016 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt
	I0815 00:06:41.401065   21011 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key.1f59b016 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key
	I0815 00:06:41.401110   21011 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key
	I0815 00:06:41.401127   21011 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt with IP's: []
	I0815 00:06:41.537178   21011 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt ...
	I0815 00:06:41.537206   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt: {Name:mk5198a1a578e019397de305f73cca9eca2115fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.537368   21011 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key ...
	I0815 00:06:41.537379   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key: {Name:mk5d87737b41328f6b5573db35e9853260839abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:41.537534   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:06:41.537565   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:06:41.537587   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:06:41.537610   21011 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:06:41.538144   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:06:41.562161   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:06:41.583393   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:06:41.604191   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:06:41.624431   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 00:06:41.644971   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:06:41.666593   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:06:41.687172   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:06:41.708080   21011 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:06:41.729273   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:06:41.743748   21011 ssh_runner.go:195] Run: openssl version
	I0815 00:06:41.748708   21011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:06:41.758100   21011 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.761839   21011 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.761891   21011 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:06:41.766903   21011 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
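The b5213941.0 symlink created above is the OpenSSL subject-hash name for the CA; the hash comes from the x509 command the log runs at 00:06:41.761891. Reproducing it by hand (paths from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # points back at minikubeCA.pem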
	I0815 00:06:41.776338   21011 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:06:41.779792   21011 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:06:41.779842   21011 kubeadm.go:392] StartCluster: {Name:addons-799058 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-799058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:06:41.779926   21011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:06:41.779979   21011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:06:41.813703   21011 cri.go:89] found id: ""
	I0815 00:06:41.813763   21011 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:06:41.822906   21011 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:06:41.831651   21011 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:06:41.840200   21011 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:06:41.840216   21011 kubeadm.go:157] found existing configuration files:
	
	I0815 00:06:41.840249   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:06:41.848195   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:06:41.848260   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:06:41.858131   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:06:41.866020   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:06:41.866061   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:06:41.874217   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:06:41.882165   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:06:41.882218   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:06:41.890419   21011 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:06:41.898707   21011 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:06:41.898782   21011 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:06:41.906860   21011 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 00:06:41.953494   21011 kubeadm.go:310] W0815 00:06:41.937366     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:06:41.954210   21011 kubeadm.go:310] W0815 00:06:41.938125     825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:06:42.060208   21011 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:06:51.849218   21011 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:06:51.849285   21011 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:06:51.849368   21011 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:06:51.849519   21011 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:06:51.849629   21011 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:06:51.849688   21011 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:06:51.851387   21011 out.go:204]   - Generating certificates and keys ...
	I0815 00:06:51.851475   21011 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:06:51.851532   21011 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:06:51.851588   21011 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:06:51.851636   21011 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:06:51.851697   21011 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:06:51.851795   21011 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:06:51.851866   21011 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:06:51.852029   21011 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-799058 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0815 00:06:51.852082   21011 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:06:51.852185   21011 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-799058 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0815 00:06:51.852264   21011 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:06:51.852354   21011 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:06:51.852423   21011 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:06:51.852499   21011 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:06:51.852579   21011 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:06:51.852638   21011 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:06:51.852721   21011 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:06:51.852798   21011 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:06:51.852851   21011 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:06:51.852934   21011 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:06:51.853028   21011 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:06:51.854515   21011 out.go:204]   - Booting up control plane ...
	I0815 00:06:51.854610   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:06:51.854711   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:06:51.854780   21011 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:06:51.854869   21011 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:06:51.854957   21011 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:06:51.855007   21011 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:06:51.855150   21011 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:06:51.855251   21011 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:06:51.855303   21011 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.262567ms
	I0815 00:06:51.855365   21011 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:06:51.855423   21011 kubeadm.go:310] [api-check] The API server is healthy after 5.002286644s
	I0815 00:06:51.855518   21011 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:06:51.855625   21011 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:06:51.855676   21011 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:06:51.855830   21011 kubeadm.go:310] [mark-control-plane] Marking the node addons-799058 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:06:51.855878   21011 kubeadm.go:310] [bootstrap-token] Using token: r61chi.auagym2grvm1kzxt
	I0815 00:06:51.857298   21011 out.go:204]   - Configuring RBAC rules ...
	I0815 00:06:51.857387   21011 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:06:51.857482   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:06:51.857679   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:06:51.857802   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:06:51.857931   21011 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:06:51.858002   21011 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:06:51.858129   21011 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:06:51.858170   21011 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:06:51.858221   21011 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:06:51.858230   21011 kubeadm.go:310] 
	I0815 00:06:51.858310   21011 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:06:51.858319   21011 kubeadm.go:310] 
	I0815 00:06:51.858400   21011 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:06:51.858410   21011 kubeadm.go:310] 
	I0815 00:06:51.858435   21011 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:06:51.858501   21011 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:06:51.858555   21011 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:06:51.858561   21011 kubeadm.go:310] 
	I0815 00:06:51.858607   21011 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:06:51.858616   21011 kubeadm.go:310] 
	I0815 00:06:51.858661   21011 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:06:51.858667   21011 kubeadm.go:310] 
	I0815 00:06:51.858722   21011 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:06:51.858785   21011 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:06:51.858873   21011 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:06:51.858883   21011 kubeadm.go:310] 
	I0815 00:06:51.858997   21011 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:06:51.859064   21011 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:06:51.859070   21011 kubeadm.go:310] 
	I0815 00:06:51.859197   21011 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r61chi.auagym2grvm1kzxt \
	I0815 00:06:51.859334   21011 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 00:06:51.859354   21011 kubeadm.go:310] 	--control-plane 
	I0815 00:06:51.859360   21011 kubeadm.go:310] 
	I0815 00:06:51.859468   21011 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:06:51.859476   21011 kubeadm.go:310] 
	I0815 00:06:51.859550   21011 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r61chi.auagym2grvm1kzxt \
	I0815 00:06:51.859643   21011 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
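The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA if a node ever has to be joined manually; this is the standard kubeadm recipe, with the certificate path taken from the certs copied earlier in this log:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'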
	I0815 00:06:51.859658   21011 cni.go:84] Creating CNI manager for ""
	I0815 00:06:51.859669   21011 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:06:51.861173   21011 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 00:06:51.862308   21011 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 00:06:51.876287   21011 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
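The bridge CNI step only records the copy of /etc/cni/net.d/1-k8s.conflist (496 bytes); the contents themselves are not in the log. A quick way to confirm what was written, with the expectation stated as a comment rather than asserted:

    sudo cat /etc/cni/net.d/1-k8s.conflist
    # expected: a "bridge" plugin with host-local IPAM over the 10.244.0.0/16 pod CIDR configured above
    # (the exact JSON is generated by minikube and not reproduced here)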
	I0815 00:06:51.892502   21011 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:06:51.892542   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:51.892591   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-799058 minikube.k8s.io/updated_at=2024_08_15T00_06_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=addons-799058 minikube.k8s.io/primary=true
	I0815 00:06:51.916485   21011 ops.go:34] apiserver oom_adj: -16
	I0815 00:06:52.006970   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:52.507109   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:53.007408   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:53.507652   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:54.008006   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:54.507969   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:55.007394   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:55.507114   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.007582   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.508018   21011 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:06:56.576312   21011 kubeadm.go:1113] duration metric: took 4.683829214s to wait for elevateKubeSystemPrivileges
	I0815 00:06:56.576350   21011 kubeadm.go:394] duration metric: took 14.796511743s to StartCluster
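To spot-check what the two kubectl runs at 00:06:51.892542 and 00:06:51.892591 created (the cluster-admin binding for kube-system:default and the node labels), the same binary and kubeconfig paths from the log can be reused:

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node addons-799058 --show-labels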
	I0815 00:06:56.576378   21011 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:56.576499   21011 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:06:56.576857   21011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:06:56.577031   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:06:56.577070   21011 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:06:56.577118   21011 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 00:06:56.577208   21011 addons.go:69] Setting yakd=true in profile "addons-799058"
	I0815 00:06:56.577229   21011 addons.go:69] Setting inspektor-gadget=true in profile "addons-799058"
	I0815 00:06:56.577242   21011 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-799058"
	I0815 00:06:56.577250   21011 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-799058"
	I0815 00:06:56.577241   21011 addons.go:69] Setting storage-provisioner=true in profile "addons-799058"
	I0815 00:06:56.577275   21011 addons.go:69] Setting volumesnapshots=true in profile "addons-799058"
	I0815 00:06:56.577283   21011 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-799058"
	I0815 00:06:56.577284   21011 addons.go:69] Setting ingress-dns=true in profile "addons-799058"
	I0815 00:06:56.577289   21011 addons.go:69] Setting cloud-spanner=true in profile "addons-799058"
	I0815 00:06:56.577289   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:56.577300   21011 addons.go:234] Setting addon volumesnapshots=true in "addons-799058"
	I0815 00:06:56.577301   21011 addons.go:69] Setting registry=true in profile "addons-799058"
	I0815 00:06:56.577310   21011 addons.go:69] Setting metrics-server=true in profile "addons-799058"
	I0815 00:06:56.577313   21011 addons.go:234] Setting addon cloud-spanner=true in "addons-799058"
	I0815 00:06:56.577319   21011 addons.go:234] Setting addon registry=true in "addons-799058"
	I0815 00:06:56.577326   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577328   21011 addons.go:234] Setting addon metrics-server=true in "addons-799058"
	I0815 00:06:56.577332   21011 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-799058"
	I0815 00:06:56.577336   21011 addons.go:69] Setting default-storageclass=true in profile "addons-799058"
	I0815 00:06:56.577344   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577350   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577362   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577370   21011 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-799058"
	I0815 00:06:56.577376   21011 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-799058"
	I0815 00:06:56.577394   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577674   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577691   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577304   21011 addons.go:234] Setting addon ingress-dns=true in "addons-799058"
	I0815 00:06:56.577746   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577752   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577758   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577764   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577769   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577770   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577794   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577795   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577815   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577326   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.577276   21011 addons.go:69] Setting helm-tiller=true in profile "addons-799058"
	I0815 00:06:56.577881   21011 addons.go:234] Setting addon helm-tiller=true in "addons-799058"
	I0815 00:06:56.577262   21011 addons.go:69] Setting gcp-auth=true in profile "addons-799058"
	I0815 00:06:56.577898   21011 mustload.go:65] Loading cluster: addons-799058
	I0815 00:06:56.577294   21011 addons.go:234] Setting addon storage-provisioner=true in "addons-799058"
	I0815 00:06:56.577236   21011 addons.go:234] Setting addon yakd=true in "addons-799058"
	I0815 00:06:56.577269   21011 addons.go:69] Setting volcano=true in profile "addons-799058"
	I0815 00:06:56.577926   21011 addons.go:234] Setting addon volcano=true in "addons-799058"
	I0815 00:06:56.577267   21011 addons.go:234] Setting addon inspektor-gadget=true in "addons-799058"
	I0815 00:06:56.577950   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.577962   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.577280   21011 addons.go:69] Setting ingress=true in profile "addons-799058"
	I0815 00:06:56.577262   21011 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-799058"
	I0815 00:06:56.578089   21011 addons.go:234] Setting addon ingress=true in "addons-799058"
	I0815 00:06:56.578126   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578159   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578188   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578248   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578283   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578452   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578472   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578574   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578595   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578649   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578656   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578665   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578768   21011 config.go:182] Loaded profile config "addons-799058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:06:56.578781   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.578883   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.578910   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.578981   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579003   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579067   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579088   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579099   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579112   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.579122   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.579145   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.580107   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.580495   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.580521   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.581232   21011 out.go:177] * Verifying Kubernetes components...
	I0815 00:06:56.582691   21011 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:06:56.598247   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0815 00:06:56.598246   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I0815 00:06:56.598802   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.598909   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.599306   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.599324   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.599438   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.599456   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.599561   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38075
	I0815 00:06:56.599718   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.599903   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.599959   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.600048   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.600645   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.600674   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.600960   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.601029   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.601046   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.601936   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.601984   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.602888   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0815 00:06:56.603453   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.603853   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.603873   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.604315   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0815 00:06:56.604469   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.604618   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.605183   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.605218   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.610411   21011 addons.go:234] Setting addon default-storageclass=true in "addons-799058"
	I0815 00:06:56.610451   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.610807   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.610840   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.616876   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0815 00:06:56.617056   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.617071   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.617456   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.617640   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.618212   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.618231   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.618523   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.618559   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.618888   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.619466   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.619502   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.623694   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0815 00:06:56.624153   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.624694   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.624712   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.625059   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.625643   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.625676   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.638808   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0815 00:06:56.640954   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0815 00:06:56.641454   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.642019   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.642036   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.642403   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.642588   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.642758   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0815 00:06:56.643438   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.643918   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.643938   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.644271   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.644401   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.644840   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.645095   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.645306   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.645326   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.645629   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.645904   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.646609   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.647050   21011 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 00:06:56.647687   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.648209   21011 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 00:06:56.648265   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 00:06:56.648285   21011 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 00:06:56.648329   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.649134   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 00:06:56.650550   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 00:06:56.650565   21011 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 00:06:56.650582   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.650729   21011 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:56.650738   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 00:06:56.650752   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.651913   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.651948   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.651965   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.652148   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0815 00:06:56.652310   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.652462   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.652702   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.653002   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0815 00:06:56.653038   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.653099   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.653971   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.653999   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.654073   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.654682   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.654700   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.654795   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.655233   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.655581   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.655645   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.655850   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.655859   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.655887   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.656069   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.656405   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.656608   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.656789   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.656946   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.656991   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.657529   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.657558   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.657708   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.657897   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0815 00:06:56.657901   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.658616   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.658618   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0815 00:06:56.658757   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.672552   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0815 00:06:56.672566   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0815 00:06:56.672619   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.672634   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0815 00:06:56.672785   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.672797   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0815 00:06:56.672820   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0815 00:06:56.672571   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.673404   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673409   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673529   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.673552   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.673567   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.673577   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673589   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.673617   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674375   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674396   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674537   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674552   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674598   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.674698   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674712   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.674750   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.674842   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.674855   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.675047   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.675139   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.675202   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675236   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675256   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.675475   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.675602   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.676015   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.676056   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.676240   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.676272   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.676388   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 00:06:56.676483   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0815 00:06:56.677049   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.677080   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.677123   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.677287   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0815 00:06:56.677694   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.677708   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.677737   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.677799   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.678626   21011 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-799058"
	I0815 00:06:56.678669   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:06:56.678756   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.678776   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 00:06:56.678780   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.678971   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.678984   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.679003   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.679028   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.679338   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.679374   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.679570   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.679593   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.679612   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.679631   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.679975   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.680133   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.680218   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.680252   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.680284   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.680334   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.680522   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 00:06:56.681388   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 00:06:56.682467   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.683351   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 00:06:56.683410   21011 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:06:56.684381   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 00:06:56.684427   21011 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:56.684441   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:06:56.684457   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.686506   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 00:06:56.687431   21011 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 00:06:56.687918   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.688329   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 00:06:56.688345   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 00:06:56.688539   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.688542   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.688561   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.688587   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.688733   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.688881   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.689061   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.691749   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.692142   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.692165   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.692510   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.692760   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.692931   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.693118   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.702500   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
	I0815 00:06:56.702936   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.703358   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.703372   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.703627   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.704017   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.704045   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.706460   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39015
	I0815 00:06:56.707252   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.707583   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0815 00:06:56.708022   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.708328   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.708344   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.708445   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.708451   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.708789   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.708946   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.708982   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.709547   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:06:56.709585   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:06:56.709934   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0815 00:06:56.710254   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.710678   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.710693   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.710940   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.710999   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.711475   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.713094   21011 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 00:06:56.714028   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0815 00:06:56.714064   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0815 00:06:56.714223   21011 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:56.714236   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 00:06:56.714253   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.714422   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.715106   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.715127   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.715155   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.715678   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.715879   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.716026   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.716042   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.716785   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.716944   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.718608   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.718649   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.719050   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.719107   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.719122   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.719180   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.719307   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.719422   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.720189   21011 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 00:06:56.720555   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.721363   21011 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:56.721386   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 00:06:56.721402   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.722130   21011 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 00:06:56.723223   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 00:06:56.723242   21011 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 00:06:56.723257   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.724763   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.725545   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.725551   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.725566   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.725703   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.725838   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.725936   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.727144   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.727617   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.727635   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.727804   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.728875   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44285
	I0815 00:06:56.728978   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.729122   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.729246   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.729698   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.730015   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.730027   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.731595   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37213
	I0815 00:06:56.731611   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.731866   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.732280   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.732788   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.732810   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.733151   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.733388   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.734350   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.734466   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0815 00:06:56.735062   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.735409   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.735603   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.735617   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.735880   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.736035   21011 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 00:06:56.736067   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.737795   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.738126   21011 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 00:06:56.738629   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0815 00:06:56.738896   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.739148   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46527
	I0815 00:06:56.739188   21011 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 00:06:56.739236   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.739248   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.739290   21011 out.go:177]   - Using image docker.io/busybox:stable
	I0815 00:06:56.739418   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 00:06:56.739428   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 00:06:56.739439   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.739909   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.740100   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.740451   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.740830   21011 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 00:06:56.740843   21011 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 00:06:56.740881   21011 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:56.740881   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.740889   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 00:06:56.740963   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.741797   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.741542   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.742519   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.742611   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:06:56.742622   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:06:56.742764   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:06:56.742777   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:06:56.742785   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:06:56.742792   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:06:56.742995   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:06:56.743050   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:06:56.743062   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 00:06:56.743147   21011 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 00:06:56.743533   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.743837   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.744972   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745445   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.745465   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.745491   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745789   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.745803   21011 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:56.745811   21011 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:06:56.745821   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.745792   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745850   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.745873   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.745975   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.746040   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.746341   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.746339   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.746735   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.746789   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.747030   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.748178   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748443   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748620   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.748709   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748850   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.748869   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.748903   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.749075   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.749076   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.749240   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.749268   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.749329   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.749359   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.749434   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	W0815 00:06:56.752723   21011 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56348->192.168.39.195:22: read: connection reset by peer
	I0815 00:06:56.752745   21011 retry.go:31] will retry after 267.092203ms: ssh: handshake failed: read tcp 192.168.39.1:56348->192.168.39.195:22: read: connection reset by peer
	I0815 00:06:56.754715   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I0815 00:06:56.755055   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.755479   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.755499   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.755743   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.755899   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.756753   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0815 00:06:56.757126   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:06:56.757379   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.757553   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:06:56.757570   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:06:56.757985   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:06:56.758199   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:06:56.758823   21011 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 00:06:56.759522   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:06:56.760798   21011 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 00:06:56.760804   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:56.761788   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:06:56.761794   21011 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 00:06:56.761809   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 00:06:56.761822   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.762835   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 00:06:56.764255   21011 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:56.764307   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 00:06:56.764355   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:06:56.765186   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.765563   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.765586   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.765782   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.766060   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.766191   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.766306   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:56.766882   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.767211   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:06:56.767230   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:06:56.767344   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:06:56.767506   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:06:56.767650   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:06:56.767750   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:06:57.105630   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 00:06:57.105653   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 00:06:57.137778   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 00:06:57.172641   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 00:06:57.172675   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 00:06:57.175519   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 00:06:57.175542   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 00:06:57.182410   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:06:57.196563   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 00:06:57.198261   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 00:06:57.213733   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 00:06:57.254905   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 00:06:57.254935   21011 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 00:06:57.262572   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 00:06:57.262594   21011 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 00:06:57.264351   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 00:06:57.322930   21011 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 00:06:57.322959   21011 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 00:06:57.323258   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 00:06:57.323271   21011 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 00:06:57.326631   21011 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 00:06:57.326650   21011 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 00:06:57.375149   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 00:06:57.375171   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 00:06:57.402693   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 00:06:57.402721   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 00:06:57.418190   21011 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:06:57.418227   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:06:57.458709   21011 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:57.458735   21011 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 00:06:57.484059   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 00:06:57.484080   21011 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 00:06:57.533052   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:06:57.546691   21011 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:57.546718   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 00:06:57.549539   21011 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 00:06:57.549555   21011 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 00:06:57.587081   21011 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:57.587107   21011 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 00:06:57.622802   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 00:06:57.622824   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 00:06:57.625074   21011 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 00:06:57.625087   21011 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 00:06:57.678934   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 00:06:57.740199   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 00:06:57.740224   21011 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 00:06:57.780477   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 00:06:57.781743   21011 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 00:06:57.781762   21011 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 00:06:57.792016   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 00:06:57.792038   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 00:06:57.796864   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 00:06:57.840190   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 00:06:57.840222   21011 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 00:06:57.904137   21011 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:57.904159   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 00:06:58.001070   21011 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 00:06:58.001107   21011 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 00:06:58.005124   21011 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 00:06:58.005142   21011 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 00:06:58.024988   21011 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:58.025005   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 00:06:58.149055   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 00:06:58.149084   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 00:06:58.160097   21011 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 00:06:58.160118   21011 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 00:06:58.163844   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 00:06:58.192811   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:06:58.426506   21011 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 00:06:58.426537   21011 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 00:06:58.462891   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 00:06:58.462914   21011 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 00:06:58.587145   21011 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:58.587166   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 00:06:58.764676   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 00:06:58.836674   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 00:06:58.836696   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 00:06:59.229260   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 00:06:59.229280   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 00:06:59.381097   21011 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:06:59.381128   21011 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 00:06:59.639603   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 00:07:00.016940   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.879122854s)
	I0815 00:07:00.016994   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:00.017006   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:00.017378   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:00.017385   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:00.017413   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:00.017430   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:00.017440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:00.017691   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:00.017710   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:01.140080   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.957634891s)
	I0815 00:07:01.140136   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:01.140147   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:01.140436   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:01.140454   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:01.140472   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:01.140480   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:01.140731   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:01.140744   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:03.758192   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 00:07:03.758234   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:07:03.761408   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:03.761894   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:07:03.761928   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:03.762138   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:07:03.762381   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:07:03.762547   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:07:03.762694   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:07:04.112114   21011 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 00:07:04.201551   21011 addons.go:234] Setting addon gcp-auth=true in "addons-799058"
	I0815 00:07:04.201596   21011 host.go:66] Checking if "addons-799058" exists ...
	I0815 00:07:04.201916   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:07:04.201941   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:07:04.218067   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0815 00:07:04.218497   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:07:04.218948   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:07:04.218969   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:07:04.219246   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:07:04.219676   21011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:07:04.219701   21011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:07:04.234596   21011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0815 00:07:04.235061   21011 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:07:04.235547   21011 main.go:141] libmachine: Using API Version  1
	I0815 00:07:04.235572   21011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:07:04.235884   21011 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:07:04.236065   21011 main.go:141] libmachine: (addons-799058) Calling .GetState
	I0815 00:07:04.237688   21011 main.go:141] libmachine: (addons-799058) Calling .DriverName
	I0815 00:07:04.237914   21011 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 00:07:04.237935   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHHostname
	I0815 00:07:04.240721   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:04.241083   21011 main.go:141] libmachine: (addons-799058) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:8d:47", ip: ""} in network mk-addons-799058: {Iface:virbr1 ExpiryTime:2024-08-15 01:06:25 +0000 UTC Type:0 Mac:52:54:00:e5:8d:47 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-799058 Clientid:01:52:54:00:e5:8d:47}
	I0815 00:07:04.241110   21011 main.go:141] libmachine: (addons-799058) DBG | domain addons-799058 has defined IP address 192.168.39.195 and MAC address 52:54:00:e5:8d:47 in network mk-addons-799058
	I0815 00:07:04.241269   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHPort
	I0815 00:07:04.241458   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHKeyPath
	I0815 00:07:04.241620   21011 main.go:141] libmachine: (addons-799058) Calling .GetSSHUsername
	I0815 00:07:04.241738   21011 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/addons-799058/id_rsa Username:docker}
	I0815 00:07:04.563901   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.367299971s)
	I0815 00:07:04.563950   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.563962   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.563964   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.365674876s)
	I0815 00:07:04.563996   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564013   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564110   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.350354461s)
	I0815 00:07:04.564138   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564148   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564157   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.299780618s)
	I0815 00:07:04.564178   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564193   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564210   21011 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.145991516s)
	I0815 00:07:04.564230   21011 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.145980709s)
	I0815 00:07:04.564244   21011 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 00:07:04.564255   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.031177286s)
	I0815 00:07:04.564271   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564279   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564391   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.885427003s)
	I0815 00:07:04.564426   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564523   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564524   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.784023893s)
	I0815 00:07:04.564551   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564562   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564589   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.767700415s)
	I0815 00:07:04.564609   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564618   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564640   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.564665   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.564676   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564683   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564720   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.400843776s)
	I0815 00:07:04.564738   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564748   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.564872   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.372029944s)
	W0815 00:07:04.564896   21011 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:07:04.564919   21011 retry.go:31] will retry after 221.047494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 00:07:04.564972   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.564983   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.564992   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.564992   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564995   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.800292192s)
	I0815 00:07:04.565014   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565020   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565025   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565029   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565030   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565037   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565044   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565048   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.564998   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565066   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565074   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565081   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565087   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565090   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565101   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565109   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565113   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565116   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565124   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565133   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.565140   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.565142   21011 node_ready.go:35] waiting up to 6m0s for node "addons-799058" to be "Ready" ...
	I0815 00:07:04.565088   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565240   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565250   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.565429   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.565583   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.565628   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567260   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567287   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567301   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567310   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567315   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567318   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567323   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.567349   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.567466   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567482   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.567502   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567509   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567516   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.567523   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.567575   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.567582   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.567589   21011 addons.go:475] Verifying addon registry=true in "addons-799058"
	I0815 00:07:04.568318   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568362   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568370   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568378   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.568384   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.568764   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568773   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568786   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568799   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568807   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568860   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568882   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568889   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568896   21011 addons.go:475] Verifying addon metrics-server=true in "addons-799058"
	I0815 00:07:04.568924   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.568945   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.568952   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.568960   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.568967   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.569017   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.569037   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.569046   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.569053   21011 addons.go:475] Verifying addon ingress=true in "addons-799058"
	I0815 00:07:04.569457   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.569468   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.569416   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.570745   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.570753   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.570765   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.570784   21011 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-799058 service yakd-dashboard -n yakd-dashboard
	
	I0815 00:07:04.570787   21011 out.go:177] * Verifying ingress addon...
	I0815 00:07:04.570873   21011 out.go:177] * Verifying registry addon...
	I0815 00:07:04.572792   21011 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 00:07:04.573108   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 00:07:04.578827   21011 node_ready.go:49] node "addons-799058" has status "Ready":"True"
	I0815 00:07:04.578851   21011 node_ready.go:38] duration metric: took 13.693608ms for node "addons-799058" to be "Ready" ...
	I0815 00:07:04.578869   21011 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:07:04.593983   21011 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 00:07:04.594004   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:04.604456   21011 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 00:07:04.604472   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:04.608975   21011 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:04.666429   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.666453   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.666675   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:04.666720   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:04.666783   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.666796   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:04.666804   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.666807   21011 pod_ready.go:92] pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:04.666820   21011 pod_ready.go:81] duration metric: took 57.815829ms for pod "coredns-6f6b679f8f-52frj" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:04.666831   21011 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace to be "Ready" ...
	W0815 00:07:04.666876   21011 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I0815 00:07:04.667015   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:04.667030   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:04.786834   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 00:07:05.068636   21011 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-799058" context rescaled to 1 replicas
	I0815 00:07:05.077998   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:05.078334   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:05.578471   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:05.580581   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.082607   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:06.085649   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.584587   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:06.584943   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:06.696965   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:06.956373   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.316729747s)
	I0815 00:07:06.956425   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956440   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.956455   21011 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.718516803s)
	I0815 00:07:06.956667   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.16977813s)
	I0815 00:07:06.956709   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.956709   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956728   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.956750   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.956760   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.956778   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.956790   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.957016   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957040   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957051   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.957064   21011 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-799058"
	I0815 00:07:06.957101   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957128   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957183   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.957196   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:06.957204   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:06.957452   21011 main.go:141] libmachine: (addons-799058) DBG | Closing plugin on server side
	I0815 00:07:06.957487   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:06.957500   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:06.958288   21011 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 00:07:06.958331   21011 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 00:07:06.959594   21011 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 00:07:06.960471   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 00:07:06.960989   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 00:07:06.961016   21011 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 00:07:06.973726   21011 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 00:07:06.973756   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.036274   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 00:07:07.036296   21011 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 00:07:07.077934   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:07.078059   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:07.082822   21011 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:07:07.082844   21011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 00:07:07.145438   21011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 00:07:07.465709   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:07.576877   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:07.578957   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.012124   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.090989   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:08.091369   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.271985   21011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.126506881s)
	I0815 00:07:08.272052   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:08.272072   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:08.272347   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:08.272365   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:08.272375   21011 main.go:141] libmachine: Making call to close driver server
	I0815 00:07:08.272383   21011 main.go:141] libmachine: (addons-799058) Calling .Close
	I0815 00:07:08.272678   21011 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:07:08.272699   21011 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:07:08.274332   21011 addons.go:475] Verifying addon gcp-auth=true in "addons-799058"
	I0815 00:07:08.276249   21011 out.go:177] * Verifying gcp-auth addon...
	I0815 00:07:08.278294   21011 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 00:07:08.284564   21011 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 00:07:08.284579   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:08.464863   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:08.578366   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:08.578819   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:08.781823   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:08.964475   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:09.077993   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:09.078435   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.172142   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:09.281161   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:09.467303   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:09.577323   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:09.577476   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:09.782470   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:09.965205   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:10.077273   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:10.077899   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.282572   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:10.464241   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:10.577319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:10.578290   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:10.782531   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:10.966305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:11.076802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:11.077417   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.173203   21011 pod_ready.go:102] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:11.282042   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:11.464927   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:11.761039   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:11.761178   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:11.780927   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:11.964918   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:12.077795   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.080410   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:12.187628   21011 pod_ready.go:97] pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:07:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 00:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 00:07:00 +0000 UTC,FinishedAt:2024-08-15 00:07:09 +0000 UTC,ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73 Started:0xc002140910 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001fb6d60} {Name:kube-api-access-b29kk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001fb6d70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 00:07:12.187666   21011 pod_ready.go:81] duration metric: took 7.52082575s for pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace to be "Ready" ...
	E0815 00:07:12.187682   21011 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-hjn98" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:07:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 00:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 00:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 00:07:00 +0000 UTC,FinishedAt:2024-08-15 00:07:09 +0000 UTC,ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://9f2106fb88c31e9899f1097bb47ec8d72e55ded06cfb6301d4466e2060bd8e73 Started:0xc002140910 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001fb6d60} {Name:kube-api-access-b29kk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001fb6d70}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 00:07:12.187696   21011 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.199518   21011 pod_ready.go:92] pod "etcd-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.199548   21011 pod_ready.go:81] duration metric: took 11.843509ms for pod "etcd-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.199576   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.204088   21011 pod_ready.go:92] pod "kube-apiserver-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.204111   21011 pod_ready.go:81] duration metric: took 4.52618ms for pod "kube-apiserver-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.204123   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.210510   21011 pod_ready.go:92] pod "kube-controller-manager-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.210535   21011 pod_ready.go:81] duration metric: took 6.403283ms for pod "kube-controller-manager-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.210550   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w8m2t" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.218659   21011 pod_ready.go:92] pod "kube-proxy-w8m2t" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.218675   21011 pod_ready.go:81] duration metric: took 8.118325ms for pod "kube-proxy-w8m2t" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.218684   21011 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.284398   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:12.466056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:12.572279   21011 pod_ready.go:92] pod "kube-scheduler-addons-799058" in "kube-system" namespace has status "Ready":"True"
	I0815 00:07:12.572308   21011 pod_ready.go:81] duration metric: took 353.617402ms for pod "kube-scheduler-addons-799058" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.572320   21011 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace to be "Ready" ...
	I0815 00:07:12.578294   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:12.579889   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:12.781118   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:12.964724   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:13.080686   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:13.080908   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.281752   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:13.465810   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:13.577277   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:13.580219   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:13.781840   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:13.964740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:14.077553   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:14.077917   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.287443   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:14.465075   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:14.577494   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:14.578193   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:14.579052   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:14.782418   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:14.967163   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:15.076622   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:15.077435   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.282241   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:15.465674   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:15.577035   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:15.577451   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:15.781428   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:15.965560   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:16.077455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:16.078629   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.282054   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:16.465201   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:16.579651   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:16.580478   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:16.585881   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:16.782687   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:16.965601   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:17.077008   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.077658   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:17.282043   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:17.464856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:17.578176   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:17.578881   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:17.781383   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:17.965066   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:18.078711   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:18.078999   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.282439   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:18.465629   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:18.578749   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:18.578931   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:18.782108   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:18.965042   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:19.078927   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.079530   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:19.081573   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:19.281693   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:19.464189   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:19.578401   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:19.578944   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:19.781449   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:19.965090   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:20.077941   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.078467   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:20.282563   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:20.464947   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:20.578343   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:20.578682   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:20.782414   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:20.965577   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:21.088339   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:21.089125   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.090959   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:21.282201   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:21.465629   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:21.577646   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:21.577730   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:21.781991   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:21.970599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:22.079439   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.080641   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:22.282882   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:22.465087   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:22.576920   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:22.577893   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:22.781852   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:22.964965   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:23.078430   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:23.078668   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.282976   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:23.464585   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:23.577843   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:23.579195   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:23.580370   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:23.782176   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:23.965183   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:24.078738   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.079342   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:24.282030   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:24.466050   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:24.577166   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:24.577358   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:24.783316   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:24.965922   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:25.077907   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.080509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:25.282655   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:25.465560   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:25.577512   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:25.578603   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:25.782527   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:25.965606   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:26.077474   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.078429   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:26.078843   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:26.282004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:26.464748   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:26.577056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:26.577774   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:26.781308   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:26.967663   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:27.077015   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:27.077414   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.282599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:27.465476   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:27.578586   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:27.578722   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:27.782746   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:27.964548   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:28.077529   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:28.081271   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:28.082036   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.297322   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:28.752222   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:28.752336   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:28.752890   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:28.781339   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:28.964922   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:29.077941   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.079915   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:29.281949   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:29.464111   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:29.577879   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:29.579272   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:29.782060   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:29.965169   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:30.078147   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.078742   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:30.286424   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:30.465594   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:30.576127   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:30.577632   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:30.578942   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:30.782156   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:30.965636   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:31.076991   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:31.077657   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.282259   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:31.464752   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:31.577895   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:31.578692   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:31.782098   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:31.964489   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:32.078105   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.078212   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:32.281761   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:32.464719   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:32.579378   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:32.579957   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:32.582510   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:32.781431   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:32.966332   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:33.078137   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:33.078364   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.281276   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:33.465417   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:33.579620   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:33.579923   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:33.782517   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:33.965315   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:34.077878   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:34.078045   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.282377   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:34.466194   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:34.577626   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:34.578400   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:34.781993   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:34.966720   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:35.078423   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:35.079964   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:35.081208   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:35.281144   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:35.465396   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:35.577316   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:35.579795   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:35.781305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:35.965208   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:36.077339   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:36.077771   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:36.281728   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:36.464904   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:36.578334   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:36.580004   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:36.781727   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:36.964482   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:37.076482   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:37.077822   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:37.282117   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:37.465395   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:37.577802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:37.578125   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:37.578974   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:37.782004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:37.964503   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:38.076759   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:38.077757   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:38.281814   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:38.465458   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:38.579113   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:38.579353   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:38.781406   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:38.965210   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:39.077482   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:39.078218   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:39.281959   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:39.464787   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:39.578740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:39.578987   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:39.580355   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:39.782687   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:39.965497   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:40.078136   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:40.079489   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:40.281892   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:40.465507   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:40.577874   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:40.578161   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:40.783640   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:40.965976   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:41.078236   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:41.078469   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:41.282033   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:41.465853   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:41.576603   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:41.577165   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:41.781958   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:41.965148   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:42.076887   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:42.077628   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:42.078429   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:42.303850   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:42.469650   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:42.576633   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:42.576830   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:42.782810   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:42.965264   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:43.081196   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:43.081756   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:43.281503   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:43.466767   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:43.576735   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:43.578177   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:43.782040   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:43.964882   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:44.077517   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:44.078635   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:44.081699   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:44.281415   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:44.467905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:44.577393   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:44.580939   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:44.781904   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:44.965305   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:45.076905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:45.078010   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:45.282268   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:45.465952   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:45.599209   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:45.599821   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:45.782056   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:45.965356   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:46.077107   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:46.079423   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:46.281566   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:46.465856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:46.578399   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:46.580416   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:46.582135   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:46.782298   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:46.966448   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:47.078839   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:47.079596   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 00:07:47.282685   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:47.467319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:47.578015   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:47.579136   21011 kapi.go:107] duration metric: took 43.006026082s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 00:07:47.781731   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:47.965236   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:48.078462   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:48.282735   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:48.465088   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:48.577479   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:48.782556   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:48.965639   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:49.077751   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:49.081509   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:49.281349   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:49.465440   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:49.579165   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:49.781996   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:49.964735   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:50.076771   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:50.281399   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:50.466809   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:50.579059   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:50.782602   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:50.965557   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:51.076023   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:51.281023   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:51.465571   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:51.577081   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:51.579048   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:51.781747   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:51.965014   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:52.079202   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:52.286969   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:52.464491   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:52.577434   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:52.783079   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:52.964775   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:53.078020   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:53.281805   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:53.464744   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:53.578419   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:53.582925   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:53.782038   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:53.965526   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:54.076839   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:54.282099   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:54.465136   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:54.576998   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:54.783186   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:54.964812   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:55.078932   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:55.282536   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:55.464781   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:55.577834   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:55.781833   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:55.965016   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:56.079337   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:56.080960   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:56.281720   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:56.464740   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:56.578407   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:56.781864   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:56.964513   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:57.077352   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:57.283896   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:57.477855   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:57.582702   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:57.783238   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:57.965639   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:58.077669   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:58.282127   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:58.465722   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:58.576688   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:58.578905   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:07:58.781650   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:58.965262   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:59.078138   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:59.281496   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:59.465453   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:07:59.578234   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:07:59.782145   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:07:59.968913   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:00.077558   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:00.282004   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:00.465356   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:00.577364   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:00.579445   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:00.783653   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:00.964905   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:01.077923   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:01.281364   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:01.465080   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:01.578056   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:01.783226   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:01.965046   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:02.076770   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:02.283101   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:02.465031   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:02.578053   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:02.783032   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:02.968065   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:03.078845   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:03.080884   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:03.281944   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:03.464956   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:03.577241   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:03.781455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:03.967084   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:04.077884   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:04.282840   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:04.465168   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:04.578045   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:04.783040   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:04.964612   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:05.076973   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:05.282319   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:05.464763   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:05.578363   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:05.580011   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:05.782557   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:05.966250   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:06.080303   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:06.287184   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:06.465373   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:06.579029   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:06.784185   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:06.965959   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:07.080912   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:07.281766   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:07.464739   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:07.586860   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:07.591870   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:07.783190   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:07.965139   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:08.077052   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:08.281640   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:08.464590   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:08.577741   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:08.782460   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:08.966645   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:09.077359   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:09.282509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:09.465851   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:09.580560   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:09.781739   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:09.965556   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:10.077612   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:10.079023   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:10.292762   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:10.475496   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:10.584389   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:10.782010   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.347160   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.347735   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:11.347996   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:11.465652   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:11.577170   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:11.782257   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:11.966886   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:12.077638   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:12.081024   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:12.281714   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:12.465184   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:12.580287   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:12.782526   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:12.965541   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:13.080560   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:13.285347   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:13.465945   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:13.577452   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:13.781599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:13.966266   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:14.077408   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:14.283053   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:14.464930   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:14.784360   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:14.785095   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:14.787900   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:14.966309   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:15.077043   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:15.281758   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:15.466208   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:15.577302   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:15.782098   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:15.964869   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:16.076948   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:16.281775   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:16.464856   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:16.584838   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:16.784397   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:16.968065   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:17.085864   21011 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 00:08:17.087631   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:17.283455   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:17.465440   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:17.843373   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:17.844278   21011 kapi.go:107] duration metric: took 1m13.27148182s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 00:08:18.026509   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:18.322084   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:18.465116   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:18.782207   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:18.965359   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:19.282077   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:19.465656   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:19.579262   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:19.782089   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:19.967413   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:20.282518   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:20.465660   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:20.781599   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:20.965374   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:21.282249   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:21.465839   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:21.783458   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 00:08:21.965929   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:22.078802   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:22.285986   21011 kapi.go:107] duration metric: took 1m14.007689781s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 00:08:22.287361   21011 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-799058 cluster.
	I0815 00:08:22.288537   21011 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 00:08:22.289520   21011 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
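	A minimal sketch of how the two gcp-auth hints above could be applied, purely for illustration: the label key and the --refresh flag come from the log messages, while the label value "true", the nginx image, the pod name, and the use of the cluster name as the kubectl context / minikube profile are assumptions.

	  # Create a pod the gcp-auth webhook should skip (label value assumed to be "true"; the log only names the key)
	  kubectl --context addons-799058 run skip-demo --image=nginx --labels="gcp-auth-skip-secret=true"

	  # Mount credentials into pods that already existed by re-enabling the addon with --refresh, as the log suggests
	  minikube -p addons-799058 addons enable gcp-auth --refresh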
	I0815 00:08:22.466909   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:22.968802   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:23.465535   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:23.965573   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:24.479262   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:24.579533   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:24.965855   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:25.467799   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:25.965315   21011 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 00:08:26.467500   21011 kapi.go:107] duration metric: took 1m19.507027779s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 00:08:26.469214   21011 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0815 00:08:26.470265   21011 addons.go:510] duration metric: took 1m29.893146816s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin inspektor-gadget metrics-server helm-tiller yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
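	As a usage note, the enabled-addons summary above can be checked from the host with the standard minikube CLI; a sketch using this run's profile name:

	  # List addon status for the profile; the addons named above should report "enabled".
	  minikube -p addons-799058 addons list
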
	I0815 00:08:26.579622   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:28.593045   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:31.078099   21011 pod_ready.go:102] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"False"
	I0815 00:08:32.577596   21011 pod_ready.go:92] pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:32.577616   21011 pod_ready.go:81] duration metric: took 1m20.005288968s for pod "metrics-server-8988944d9-q4bwq" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.577636   21011 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.581788   21011 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace has status "Ready":"True"
	I0815 00:08:32.581804   21011 pod_ready.go:81] duration metric: took 4.162913ms for pod "nvidia-device-plugin-daemonset-4jqvz" in "kube-system" namespace to be "Ready" ...
	I0815 00:08:32.581822   21011 pod_ready.go:38] duration metric: took 1m28.00293987s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:08:32.581838   21011 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:08:32.581879   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:32.581925   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:32.624183   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:32.624203   21011 cri.go:89] found id: ""
	I0815 00:08:32.624211   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:32.624255   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.628117   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:32.628164   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:32.664258   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:32.664281   21011 cri.go:89] found id: ""
	I0815 00:08:32.664294   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:32.664350   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.668249   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:32.668340   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:32.702483   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:32.702506   21011 cri.go:89] found id: ""
	I0815 00:08:32.702515   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:32.702572   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.706214   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:32.706265   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:32.748939   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:32.748962   21011 cri.go:89] found id: ""
	I0815 00:08:32.748971   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:32.749019   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.752759   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:32.752805   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:32.787184   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:32.787205   21011 cri.go:89] found id: ""
	I0815 00:08:32.787215   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:32.787267   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.790875   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:32.790936   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:32.825094   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:32.825111   21011 cri.go:89] found id: ""
	I0815 00:08:32.825119   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:32.825169   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:32.829165   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:32.829214   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:32.863721   21011 cri.go:89] found id: ""
	I0815 00:08:32.863747   21011 logs.go:276] 0 containers: []
	W0815 00:08:32.863759   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:32.863768   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:32.863779   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:32.998484   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:32.998508   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:33.044876   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:33.044903   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:33.100460   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:33.100488   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:33.154205   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:33.154247   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:33.918618   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:33.918663   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:34.008157   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:34.008193   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:34.022331   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:34.022361   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:34.055805   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:34.055836   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:34.101793   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:34.101818   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:34.137768   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:34.137790   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:36.697527   21011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:08:36.717467   21011 api_server.go:72] duration metric: took 1m40.140368673s to wait for apiserver process to appear ...
	I0815 00:08:36.717486   21011 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:08:36.717515   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:36.717559   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:36.762131   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:36.762157   21011 cri.go:89] found id: ""
	I0815 00:08:36.762167   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:36.762212   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.765968   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:36.766024   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:36.800497   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:36.800520   21011 cri.go:89] found id: ""
	I0815 00:08:36.800530   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:36.800584   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.804297   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:36.804359   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:36.838394   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:36.838411   21011 cri.go:89] found id: ""
	I0815 00:08:36.838418   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:36.838467   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.842170   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:36.842219   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:36.887217   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:36.887244   21011 cri.go:89] found id: ""
	I0815 00:08:36.887254   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:36.887306   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.892331   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:36.892398   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:36.933664   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:36.933682   21011 cri.go:89] found id: ""
	I0815 00:08:36.933690   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:36.933734   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.938120   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:36.938186   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:36.977242   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:36.977269   21011 cri.go:89] found id: ""
	I0815 00:08:36.977279   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:36.977342   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:36.981262   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:36.981323   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:37.016043   21011 cri.go:89] found id: ""
	I0815 00:08:37.016069   21011 logs.go:276] 0 containers: []
	W0815 00:08:37.016077   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:37.016087   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:37.016102   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:37.052982   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:37.053007   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:37.098916   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:37.098947   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:37.143999   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:37.144028   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:37.260585   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:37.260612   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:37.312434   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:37.312461   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:37.349486   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:37.349525   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:37.414647   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:37.414680   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:38.264837   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:38.264885   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:38.310470   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:38.310500   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:38.390281   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:38.390315   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:40.908588   21011 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0815 00:08:40.913318   21011 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0815 00:08:40.914267   21011 api_server.go:141] control plane version: v1.31.0
	I0815 00:08:40.914289   21011 api_server.go:131] duration metric: took 4.19679749s to wait for apiserver health ...
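	The healthz poll above can be reproduced by hand against the same endpoint; a sketch assuming the apiserver allows unauthenticated access to /healthz (the Kubernetes default) and skipping TLS verification of the cluster's self-signed certificate:

	  # Query the endpoint minikube checked above; a healthy apiserver answers with the body "ok".
	  curl -sk https://192.168.39.195:8443/healthz
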
	I0815 00:08:40.914297   21011 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:08:40.914315   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 00:08:40.914364   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 00:08:40.955707   21011 cri.go:89] found id: "fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:40.955728   21011 cri.go:89] found id: ""
	I0815 00:08:40.955735   21011 logs.go:276] 1 containers: [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f]
	I0815 00:08:40.955780   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:40.959499   21011 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 00:08:40.959562   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 00:08:41.001498   21011 cri.go:89] found id: "976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:41.001517   21011 cri.go:89] found id: ""
	I0815 00:08:41.001524   21011 logs.go:276] 1 containers: [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45]
	I0815 00:08:41.001569   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.005598   21011 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 00:08:41.005638   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 00:08:41.044195   21011 cri.go:89] found id: "b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:41.044216   21011 cri.go:89] found id: ""
	I0815 00:08:41.044226   21011 logs.go:276] 1 containers: [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc]
	I0815 00:08:41.044282   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.048045   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 00:08:41.048091   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 00:08:41.088199   21011 cri.go:89] found id: "807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:41.088215   21011 cri.go:89] found id: ""
	I0815 00:08:41.088221   21011 logs.go:276] 1 containers: [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559]
	I0815 00:08:41.088268   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.092123   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 00:08:41.092169   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 00:08:41.133571   21011 cri.go:89] found id: "1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:41.133596   21011 cri.go:89] found id: ""
	I0815 00:08:41.133605   21011 logs.go:276] 1 containers: [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31]
	I0815 00:08:41.133662   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.137878   21011 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 00:08:41.137945   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 00:08:41.184888   21011 cri.go:89] found id: "169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:41.184911   21011 cri.go:89] found id: ""
	I0815 00:08:41.184920   21011 logs.go:276] 1 containers: [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2]
	I0815 00:08:41.184980   21011 ssh_runner.go:195] Run: which crictl
	I0815 00:08:41.189258   21011 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 00:08:41.189314   21011 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 00:08:41.225843   21011 cri.go:89] found id: ""
	I0815 00:08:41.225867   21011 logs.go:276] 0 containers: []
	W0815 00:08:41.225875   21011 logs.go:278] No container was found matching "kindnet"
	I0815 00:08:41.225883   21011 logs.go:123] Gathering logs for etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] ...
	I0815 00:08:41.225893   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45"
	I0815 00:08:41.295073   21011 logs.go:123] Gathering logs for coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] ...
	I0815 00:08:41.295103   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc"
	I0815 00:08:41.337195   21011 logs.go:123] Gathering logs for kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] ...
	I0815 00:08:41.337221   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559"
	I0815 00:08:41.378765   21011 logs.go:123] Gathering logs for kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] ...
	I0815 00:08:41.378792   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31"
	I0815 00:08:41.415889   21011 logs.go:123] Gathering logs for CRI-O ...
	I0815 00:08:41.415921   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 00:08:42.312367   21011 logs.go:123] Gathering logs for container status ...
	I0815 00:08:42.312426   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 00:08:42.370567   21011 logs.go:123] Gathering logs for describe nodes ...
	I0815 00:08:42.370599   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 00:08:42.511442   21011 logs.go:123] Gathering logs for dmesg ...
	I0815 00:08:42.511468   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 00:08:42.525611   21011 logs.go:123] Gathering logs for kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] ...
	I0815 00:08:42.525643   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f"
	I0815 00:08:42.576744   21011 logs.go:123] Gathering logs for kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] ...
	I0815 00:08:42.576778   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2"
	I0815 00:08:42.636406   21011 logs.go:123] Gathering logs for kubelet ...
	I0815 00:08:42.636441   21011 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 00:08:45.225407   21011 system_pods.go:59] 18 kube-system pods found
	I0815 00:08:45.225437   21011 system_pods.go:61] "coredns-6f6b679f8f-52frj" [14443991-d0d3-4971-ace5-79219c17a3a4] Running
	I0815 00:08:45.225443   21011 system_pods.go:61] "csi-hostpath-attacher-0" [07bcc102-e23b-4e0f-b36a-83560a72e91f] Running
	I0815 00:08:45.225447   21011 system_pods.go:61] "csi-hostpath-resizer-0" [ae76d226-ea7b-4fcf-8713-0cfafece3e41] Running
	I0815 00:08:45.225450   21011 system_pods.go:61] "csi-hostpathplugin-5dp4z" [d97e647b-48bd-4f97-a7a7-9212f1ed9da6] Running
	I0815 00:08:45.225453   21011 system_pods.go:61] "etcd-addons-799058" [c6cb9162-e068-4148-9d9f-41f388239eb1] Running
	I0815 00:08:45.225456   21011 system_pods.go:61] "kube-apiserver-addons-799058" [861b1168-123e-40d8-b823-f643f214aafc] Running
	I0815 00:08:45.225459   21011 system_pods.go:61] "kube-controller-manager-addons-799058" [f960d7dd-d373-4498-a4fc-9ac1fd923b96] Running
	I0815 00:08:45.225462   21011 system_pods.go:61] "kube-ingress-dns-minikube" [b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa] Running
	I0815 00:08:45.225464   21011 system_pods.go:61] "kube-proxy-w8m2t" [26a17fd3-81aa-46a5-b148-82c4e3d16273] Running
	I0815 00:08:45.225467   21011 system_pods.go:61] "kube-scheduler-addons-799058" [2785a399-481e-4950-8779-b898b5f2a900] Running
	I0815 00:08:45.225471   21011 system_pods.go:61] "metrics-server-8988944d9-q4bwq" [95a56e8f-f680-4b31-bdc3-34e9e748a9b7] Running
	I0815 00:08:45.225474   21011 system_pods.go:61] "nvidia-device-plugin-daemonset-4jqvz" [86f19320-28d1-4fc0-9865-20a09c4e891a] Running
	I0815 00:08:45.225476   21011 system_pods.go:61] "registry-6fb4cdfc84-fwfvr" [0c0970af-9934-491e-bcfa-fa54ed7e0e3e] Running
	I0815 00:08:45.225479   21011 system_pods.go:61] "registry-proxy-kq9fl" [58301448-7012-48c0-8f9b-a5da1d7ebb5b] Running
	I0815 00:08:45.225481   21011 system_pods.go:61] "snapshot-controller-56fcc65765-9j9cr" [49b196b9-2c6f-4376-b6bd-25f7bcba9b02] Running
	I0815 00:08:45.225485   21011 system_pods.go:61] "snapshot-controller-56fcc65765-bbx2t" [ce67ca25-a279-4610-af34-e7d1aeb14426] Running
	I0815 00:08:45.225487   21011 system_pods.go:61] "storage-provisioner" [1409d83f-8419-4e70-9137-80faff3e10c2] Running
	I0815 00:08:45.225492   21011 system_pods.go:61] "tiller-deploy-b48cc5f79-xd29w" [792a4027-3c8e-4383-ae2c-9615a900c9a9] Running
	I0815 00:08:45.225500   21011 system_pods.go:74] duration metric: took 4.311197977s to wait for pod list to return data ...
	I0815 00:08:45.225507   21011 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:08:45.227828   21011 default_sa.go:45] found service account: "default"
	I0815 00:08:45.227846   21011 default_sa.go:55] duration metric: took 2.332119ms for default service account to be created ...
	I0815 00:08:45.227853   21011 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:08:45.234250   21011 system_pods.go:86] 18 kube-system pods found
	I0815 00:08:45.234274   21011 system_pods.go:89] "coredns-6f6b679f8f-52frj" [14443991-d0d3-4971-ace5-79219c17a3a4] Running
	I0815 00:08:45.234279   21011 system_pods.go:89] "csi-hostpath-attacher-0" [07bcc102-e23b-4e0f-b36a-83560a72e91f] Running
	I0815 00:08:45.234283   21011 system_pods.go:89] "csi-hostpath-resizer-0" [ae76d226-ea7b-4fcf-8713-0cfafece3e41] Running
	I0815 00:08:45.234287   21011 system_pods.go:89] "csi-hostpathplugin-5dp4z" [d97e647b-48bd-4f97-a7a7-9212f1ed9da6] Running
	I0815 00:08:45.234290   21011 system_pods.go:89] "etcd-addons-799058" [c6cb9162-e068-4148-9d9f-41f388239eb1] Running
	I0815 00:08:45.234295   21011 system_pods.go:89] "kube-apiserver-addons-799058" [861b1168-123e-40d8-b823-f643f214aafc] Running
	I0815 00:08:45.234299   21011 system_pods.go:89] "kube-controller-manager-addons-799058" [f960d7dd-d373-4498-a4fc-9ac1fd923b96] Running
	I0815 00:08:45.234303   21011 system_pods.go:89] "kube-ingress-dns-minikube" [b07e0109-a1a5-4e02-9021-1dbd4e7cd3aa] Running
	I0815 00:08:45.234307   21011 system_pods.go:89] "kube-proxy-w8m2t" [26a17fd3-81aa-46a5-b148-82c4e3d16273] Running
	I0815 00:08:45.234310   21011 system_pods.go:89] "kube-scheduler-addons-799058" [2785a399-481e-4950-8779-b898b5f2a900] Running
	I0815 00:08:45.234314   21011 system_pods.go:89] "metrics-server-8988944d9-q4bwq" [95a56e8f-f680-4b31-bdc3-34e9e748a9b7] Running
	I0815 00:08:45.234318   21011 system_pods.go:89] "nvidia-device-plugin-daemonset-4jqvz" [86f19320-28d1-4fc0-9865-20a09c4e891a] Running
	I0815 00:08:45.234322   21011 system_pods.go:89] "registry-6fb4cdfc84-fwfvr" [0c0970af-9934-491e-bcfa-fa54ed7e0e3e] Running
	I0815 00:08:45.234325   21011 system_pods.go:89] "registry-proxy-kq9fl" [58301448-7012-48c0-8f9b-a5da1d7ebb5b] Running
	I0815 00:08:45.234334   21011 system_pods.go:89] "snapshot-controller-56fcc65765-9j9cr" [49b196b9-2c6f-4376-b6bd-25f7bcba9b02] Running
	I0815 00:08:45.234338   21011 system_pods.go:89] "snapshot-controller-56fcc65765-bbx2t" [ce67ca25-a279-4610-af34-e7d1aeb14426] Running
	I0815 00:08:45.234346   21011 system_pods.go:89] "storage-provisioner" [1409d83f-8419-4e70-9137-80faff3e10c2] Running
	I0815 00:08:45.234350   21011 system_pods.go:89] "tiller-deploy-b48cc5f79-xd29w" [792a4027-3c8e-4383-ae2c-9615a900c9a9] Running
	I0815 00:08:45.234355   21011 system_pods.go:126] duration metric: took 6.49824ms to wait for k8s-apps to be running ...
	I0815 00:08:45.234364   21011 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:08:45.234417   21011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:08:45.249947   21011 system_svc.go:56] duration metric: took 15.574951ms WaitForService to wait for kubelet
	I0815 00:08:45.249979   21011 kubeadm.go:582] duration metric: took 1m48.672886295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:08:45.249999   21011 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:08:45.253261   21011 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:08:45.253286   21011 node_conditions.go:123] node cpu capacity is 2
	I0815 00:08:45.253298   21011 node_conditions.go:105] duration metric: took 3.294781ms to run NodePressure ...
	I0815 00:08:45.253309   21011 start.go:241] waiting for startup goroutines ...
	I0815 00:08:45.253318   21011 start.go:246] waiting for cluster config update ...
	I0815 00:08:45.253333   21011 start.go:255] writing updated cluster config ...
	I0815 00:08:45.253606   21011 ssh_runner.go:195] Run: rm -f paused
	I0815 00:08:45.302478   21011 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:08:45.304086   21011 out.go:177] * Done! kubectl is now configured to use "addons-799058" cluster and "default" namespace by default
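	A quick way to exercise the context written above (standard kubectl commands, nothing specific to this run):

	  # Confirm the active context and the default namespace configured by minikube.
	  kubectl config current-context    # expected: addons-799058
	  kubectl get pods                  # queries the "default" namespace by default
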
	
	
	==> CRI-O <==
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.803369611Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-wbmmj,Uid:9cf92d0f-e40e-458e-a372-73ebae3a84db,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680736934925880,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:12:16.623522689Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&PodSandboxMetadata{Name:nginx,Uid:2dd945a2-dba6-4274-a0e9-67190b86b7cd,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1723680593233375868,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:09:52.925082589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c0f417da-11f4-4f03-807b-3907aa99d556,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680525884396479,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-807b-3907aa99d556,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:08:45.572270760Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19aaea48b156d2161b
6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&PodSandboxMetadata{Name:metrics-server-8988944d9-q4bwq,Uid:95a56e8f-f680-4b31-bdc3-34e9e748a9b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680422232986235,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,k8s-app: metrics-server,pod-template-hash: 8988944d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:07:01.923537885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1409d83f-8419-4e70-9137-80faff3e10c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680421803452192,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T00:07:01.135993072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-52frj,Uid:14443991-d0d3-4971-ace5-79219c17a3a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680416752514620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:06:56.432499140Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&PodSandboxMetadata{Name:kube-proxy-w8m2t,Uid:26a17fd3-81aa-46a5-b148-82c4e3d16273,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680416613206540,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:06:56.292444430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-799058,Uid:ee8e58d0bf849a27c39cec9b48b924b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680405840322734,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ee8e58d0bf849a27c39cec9b48b924b6,kubernetes.io/config.seen: 2024-08-15T00:06:45.369146660Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-799058,Uid:43f04b6198bea76ee447b0b5034bae3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680405832218870,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: 43f04b6198bea76ee447b0b5034bae3f,kubernetes.io/config.seen: 2024-08-15T00:06:45.369144207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&PodSandboxMetadata{Name:etcd-addons-799058,Uid:f5a0f5eb47e46aa4e2b3563c52b968db,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1723680405817169444,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: f5a0f5eb47e46aa4e2b3563c52b968db,kubernetes.io/config.seen: 2024-08-15T00:06:45.369140572Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-799058,Uid:8687703bca7345532ca828a5340bd3f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723680405814484452,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8687703bca7345532ca828a5340bd3f4,kubernetes.io/config.seen: 2024-08-15T00:06:45.369145447Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=86947faf-0a42-4e47-a324-aa64393ba93f name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.804101366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ea8044f-a217-433a-9d4d-ab7211febf4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.804171317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ea8044f-a217-433a-9d4d-ab7211febf4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.804431902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172368040600570
9507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723680405977363934,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ea8044f-a217-433a-9d4d-ab7211febf4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.808915488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87bc441c-b7df-4357-8b9f-a064a874aca6 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.808983907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87bc441c-b7df-4357-8b9f-a064a874aca6 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.810017839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b40e7e70-ad87-4c42-94b8-2128a7f2ad92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.811323278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680833811293982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b40e7e70-ad87-4c42-94b8-2128a7f2ad92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.812081515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbff0fde-5f87-4ce0-9470-9f04a701a172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.812148405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbff0fde-5f87-4ce0-9470-9f04a701a172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.812417738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172368040600570
9507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723680405977363934,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbff0fde-5f87-4ce0-9470-9f04a701a172 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.847437630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e959d3b-8b1a-46e9-a3c3-7c5d0acbe73f name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.847524556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e959d3b-8b1a-46e9-a3c3-7c5d0acbe73f name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.848613725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9451c421-adf5-44aa-abe3-362ac914845a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.849997602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680833849972581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9451c421-adf5-44aa-abe3-362ac914845a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.850541653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90517d14-bb97-4b13-b029-b30fa7b64fef name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.850606462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90517d14-bb97-4b13-b029-b30fa7b64fef name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.850893737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172368040600570
9507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723680405977363934,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90517d14-bb97-4b13-b029-b30fa7b64fef name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.882912846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62c031b1-aff6-4aec-aefc-d68519525498 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.882997205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62c031b1-aff6-4aec-aefc-d68519525498 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.884055376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0163477f-b443-4979-b5b8-9dee60e46301 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.885244415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680833885217897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0163477f-b443-4979-b5b8-9dee60e46301 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.885983875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b62bb771-f5b8-4ac1-9557-d9b4fbf3f67e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.886048437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b62bb771-f5b8-4ac1-9557-d9b4fbf3f67e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:13:53 addons-799058 crio[672]: time="2024-08-15 00:13:53.886291190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e79f8c796118e82d493dd3f3f0004ccd1dbc20302f74c98fb6ebb4bb19a9bf89,PodSandboxId:8b751f03a8aaeb6d913fcef3b55a8cb7b7d8d3adf01f79b98f9dca38194eef44,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723680739397199003,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wbmmj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf92d0f-e40e-458e-a372-73ebae3a84db,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e7a23046585b55d48c3420a46d560ad8e2ea638f14610e1f6caab5556ae153,PodSandboxId:0bfd4e7031a9c8a54520b52c1f1f4876bdca65f1068e4b82959f432fdaf19ebd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723680597117008153,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd945a2-dba6-4274-a0e9-67190b86b7cd,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8914521ade9238fb75d858164bbe70559e5b8be3bdd47a2f6189b2e2da8c060a,PodSandboxId:d0c83e0816f9d3b95929a60f82b2b9f95e3ddf94d29e098b37f44ef8b65f3864,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723680528766647760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0f417da-11f4-4f03-8
07b-3907aa99d556,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc86ea9d9d136c114e1071f2b92608b2d9eb48a7a30b40dea8af85e8e3f87c1d,PodSandboxId:19aaea48b156d2161b6c06f271ad0d80bcc168ef452c2747c93d353e3ad6993a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723680444729279679,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-q4bwq,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 95a56e8f-f680-4b31-bdc3-34e9e748a9b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290,PodSandboxId:dcc54c3df9e9df0a2a9fcaccc499d8435ec40c28e5ba805799ae2676e1684a9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723680422352680082,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1409d83f-8419-4e70-9137-80faff3e10c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc,PodSandboxId:26038c7838ab4d2249cd8f79252dd1277f3320ae02c47a3f56548de014e00beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723680419438251121,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-52frj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14443991-d0d3-4971-ace5-79219c17a3a4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31,PodSandboxId:e497bc0b1ae95488b150c129d9b38f44f18f7e679eb42d4974eee8b8594b5088,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723680416913909138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w8m2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26a17fd3-81aa-46a5-b148-82c4e3d16273,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559,PodSandboxId:e52eba7cb561b6f015b41eb6ca94ba7f5e285dbfbfa9bacf3eb6bcda5bf57e53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f
729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723680406048028580,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee8e58d0bf849a27c39cec9b48b924b6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45,PodSandboxId:7d2609d4df11f6f67438aa835d1caf7e97273ef3819e1c17a740fc3de977eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAI
NER_RUNNING,CreatedAt:1723680405997226798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5a0f5eb47e46aa4e2b3563c52b968db,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f,PodSandboxId:ae520df873e65352f64bada52055f7f809db9c2806023f5bf2e7db1716cf26b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:172368040600570
9507,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43f04b6198bea76ee447b0b5034bae3f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2,PodSandboxId:291a455ba1d587df3700368aa2b28f312dcc1060f41632cba6cf40882d342036,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723680405977363934,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-799058,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8687703bca7345532ca828a5340bd3f4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b62bb771-f5b8-4ac1-9557-d9b4fbf3f67e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e79f8c796118e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   8b751f03a8aae       hello-world-app-55bf9c44b4-wbmmj
	20e7a23046585       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         3 minutes ago        Running             nginx                     0                   0bfd4e7031a9c       nginx
	8914521ade923       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago        Running             busybox                   0                   d0c83e0816f9d       busybox
	dc86ea9d9d136       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago        Running             metrics-server            0                   19aaea48b156d       metrics-server-8988944d9-q4bwq
	4e32777771788       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago        Running             storage-provisioner       0                   dcc54c3df9e9d       storage-provisioner
	b93836edc2ea0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago        Running             coredns                   0                   26038c7838ab4       coredns-6f6b679f8f-52frj
	1a5055649b6ad       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        6 minutes ago        Running             kube-proxy                0                   e497bc0b1ae95       kube-proxy-w8m2t
	807c4f41537ad       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago        Running             kube-scheduler            0                   e52eba7cb561b       kube-scheduler-addons-799058
	fcfebfef6006c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago        Running             kube-apiserver            0                   ae520df873e65       kube-apiserver-addons-799058
	976439828fd9f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago        Running             etcd                      0                   7d2609d4df11f       etcd-addons-799058
	169699f15f7ad       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago        Running             kube-controller-manager   0                   291a455ba1d58       kube-controller-manager-addons-799058
	
	
	==> coredns [b93836edc2ea08743f22979960201e8c4fcfe54aaa6101985dac8ec18b6a44fc] <==
	[INFO] 10.244.0.7:44728 - 48913 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163735s
	[INFO] 10.244.0.7:33085 - 41949 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000162335s
	[INFO] 10.244.0.7:33085 - 26579 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071213s
	[INFO] 10.244.0.7:42100 - 33841 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181494s
	[INFO] 10.244.0.7:42100 - 42547 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081153s
	[INFO] 10.244.0.7:43066 - 4739 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125024s
	[INFO] 10.244.0.7:43066 - 13185 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080829s
	[INFO] 10.244.0.7:36814 - 26352 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090989s
	[INFO] 10.244.0.7:36814 - 39148 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036433s
	[INFO] 10.244.0.7:59349 - 13803 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049202s
	[INFO] 10.244.0.7:59349 - 44268 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036549s
	[INFO] 10.244.0.7:58584 - 43526 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046797s
	[INFO] 10.244.0.7:58584 - 20992 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021114s
	[INFO] 10.244.0.7:41449 - 15767 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000034624s
	[INFO] 10.244.0.7:41449 - 45465 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043147s
	[INFO] 10.244.0.22:34376 - 26710 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460643s
	[INFO] 10.244.0.22:36728 - 44220 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00007705s
	[INFO] 10.244.0.22:56456 - 364 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100243s
	[INFO] 10.244.0.22:46575 - 63414 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184612s
	[INFO] 10.244.0.22:49957 - 28793 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144474s
	[INFO] 10.244.0.22:46582 - 43057 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121699s
	[INFO] 10.244.0.22:37055 - 23457 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000605161s
	[INFO] 10.244.0.22:51558 - 49034 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000506856s
	[INFO] 10.244.0.24:36290 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000265033s
	[INFO] 10.244.0.24:35067 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112312s
	
	
	==> describe nodes <==
	Name:               addons-799058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-799058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=addons-799058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_06_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-799058
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-799058
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:13:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:12:28 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:12:28 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:12:28 +0000   Thu, 15 Aug 2024 00:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:12:28 +0000   Thu, 15 Aug 2024 00:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-799058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a1aa125092d40769e61470729cb010e
	  System UUID:                5a1aa125-092d-4076-9e61-470729cb010e
	  Boot ID:                    b9c872b0-2204-4dd5-9cf2-48f47e734356
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  default                     hello-world-app-55bf9c44b4-wbmmj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-6f6b679f8f-52frj                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m58s
	  kube-system                 etcd-addons-799058                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m3s
	  kube-system                 kube-apiserver-addons-799058             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-controller-manager-addons-799058    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-proxy-w8m2t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-scheduler-addons-799058             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 metrics-server-8988944d9-q4bwq           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m53s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  7m9s (x8 over 7m9s)  kubelet          Node addons-799058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m9s (x8 over 7m9s)  kubelet          Node addons-799058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m9s (x7 over 7m9s)  kubelet          Node addons-799058 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m3s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m3s                 kubelet          Node addons-799058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m3s                 kubelet          Node addons-799058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m3s                 kubelet          Node addons-799058 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m2s                 kubelet          Node addons-799058 status is now: NodeReady
	  Normal  RegisteredNode           6m59s                node-controller  Node addons-799058 event: Registered Node addons-799058 in Controller
	
	
	==> dmesg <==
	[  +5.022722] kauditd_printk_skb: 138 callbacks suppressed
	[  +5.394243] kauditd_printk_skb: 57 callbacks suppressed
	[ +10.261879] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.772450] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.088064] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.252351] kauditd_printk_skb: 4 callbacks suppressed
	[Aug15 00:08] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.018775] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.954549] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.554755] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.334519] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.101248] kauditd_printk_skb: 38 callbacks suppressed
	[ +28.048616] kauditd_printk_skb: 7 callbacks suppressed
	[Aug15 00:09] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.662460] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.139342] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.013746] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.186732] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.121671] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.338675] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.806288] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.724259] kauditd_printk_skb: 22 callbacks suppressed
	[Aug15 00:10] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.173074] kauditd_printk_skb: 10 callbacks suppressed
	[Aug15 00:12] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [976439828fd9f7223baffb5cfa9f4ea21860c29db36d108bc6fba819ea80eb45] <==
	{"level":"info","ts":"2024-08-15T00:08:14.763956Z","caller":"traceutil/trace.go:171","msg":"trace[231408599] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-q4bwq; range_end:; response_count:1; response_revision:1145; }","duration":"202.576938ms","start":"2024-08-15T00:08:14.561374Z","end":"2024-08-15T00:08:14.763951Z","steps":["trace[231408599] 'agreement among raft nodes before linearized reading'  (duration: 202.502563ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:14.764190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.989959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-15T00:08:14.764210Z","caller":"traceutil/trace.go:171","msg":"trace[1205138313] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1145; }","duration":"148.011889ms","start":"2024-08-15T00:08:14.616191Z","end":"2024-08-15T00:08:14.764203Z","steps":["trace[1205138313] 'agreement among raft nodes before linearized reading'  (duration: 147.940397ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:08:17.826297Z","caller":"traceutil/trace.go:171","msg":"trace[188591949] linearizableReadLoop","detail":"{readStateIndex:1190; appliedIndex:1189; }","duration":"265.092832ms","start":"2024-08-15T00:08:17.561182Z","end":"2024-08-15T00:08:17.826275Z","steps":["trace[188591949] 'read index received'  (duration: 265.075923ms)","trace[188591949] 'applied index is now lower than readState.Index'  (duration: 16.29µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:08:17.826473Z","caller":"traceutil/trace.go:171","msg":"trace[1923140916] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"299.78416ms","start":"2024-08-15T00:08:17.526544Z","end":"2024-08-15T00:08:17.826328Z","steps":["trace[1923140916] 'process raft request'  (duration: 299.64744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:17.826567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.381748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:08:17.826613Z","caller":"traceutil/trace.go:171","msg":"trace[2015704096] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1159; }","duration":"265.425357ms","start":"2024-08-15T00:08:17.561178Z","end":"2024-08-15T00:08:17.826603Z","steps":["trace[2015704096] 'agreement among raft nodes before linearized reading'  (duration: 265.330725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:17.827030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:08:17.526518Z","time spent":"300.034608ms","remote":"127.0.0.1:52584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1134 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-08-15T00:08:17.826494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.290901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-q4bwq\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-15T00:08:17.827340Z","caller":"traceutil/trace.go:171","msg":"trace[281104173] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-q4bwq; range_end:; response_count:1; response_revision:1159; }","duration":"266.149096ms","start":"2024-08-15T00:08:17.561181Z","end":"2024-08-15T00:08:17.827330Z","steps":["trace[281104173] 'agreement among raft nodes before linearized reading'  (duration: 265.19053ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:08:45.172795Z","caller":"traceutil/trace.go:171","msg":"trace[1109562241] linearizableReadLoop","detail":"{readStateIndex:1299; appliedIndex:1298; }","duration":"278.273234ms","start":"2024-08-15T00:08:44.894485Z","end":"2024-08-15T00:08:45.172758Z","steps":["trace[1109562241] 'read index received'  (duration: 278.088927ms)","trace[1109562241] 'applied index is now lower than readState.Index'  (duration: 183.574µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:08:45.172979Z","caller":"traceutil/trace.go:171","msg":"trace[1567391735] transaction","detail":"{read_only:false; response_revision:1263; number_of_response:1; }","duration":"343.098032ms","start":"2024-08-15T00:08:44.829862Z","end":"2024-08-15T00:08:45.172960Z","steps":["trace[1567391735] 'process raft request'  (duration: 342.758287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:45.173086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.531333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:08:45.173111Z","caller":"traceutil/trace.go:171","msg":"trace[1733662269] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1263; }","duration":"278.628069ms","start":"2024-08-15T00:08:44.894476Z","end":"2024-08-15T00:08:45.173104Z","steps":["trace[1733662269] 'agreement among raft nodes before linearized reading'  (duration: 278.512026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:08:45.173140Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:08:44.829847Z","time spent":"343.166921ms","remote":"127.0.0.1:52584","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1254 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-15T00:08:45.173333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.798304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-15T00:08:45.173353Z","caller":"traceutil/trace.go:171","msg":"trace[927988656] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1263; }","duration":"269.819705ms","start":"2024-08-15T00:08:44.903527Z","end":"2024-08-15T00:08:45.173346Z","steps":["trace[927988656] 'agreement among raft nodes before linearized reading'  (duration: 269.737102ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:22.780505Z","caller":"traceutil/trace.go:171","msg":"trace[1279385666] transaction","detail":"{read_only:false; response_revision:1470; number_of_response:1; }","duration":"200.897506ms","start":"2024-08-15T00:09:22.579587Z","end":"2024-08-15T00:09:22.780484Z","steps":["trace[1279385666] 'process raft request'  (duration: 200.810075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:22.781072Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.864929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-08-15T00:09:22.781110Z","caller":"traceutil/trace.go:171","msg":"trace[241520890] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1470; }","duration":"110.914999ms","start":"2024-08-15T00:09:22.670189Z","end":"2024-08-15T00:09:22.781104Z","steps":["trace[241520890] 'agreement among raft nodes before linearized reading'  (duration: 110.805673ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:22.780944Z","caller":"traceutil/trace.go:171","msg":"trace[405929852] linearizableReadLoop","detail":"{readStateIndex:1519; appliedIndex:1518; }","duration":"110.739586ms","start":"2024-08-15T00:09:22.670193Z","end":"2024-08-15T00:09:22.780932Z","steps":["trace[405929852] 'read index received'  (duration: 110.139039ms)","trace[405929852] 'applied index is now lower than readState.Index'  (duration: 599.342µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:09:22.929067Z","caller":"traceutil/trace.go:171","msg":"trace[1349584391] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"146.401645ms","start":"2024-08-15T00:09:22.782652Z","end":"2024-08-15T00:09:22.929053Z","steps":["trace[1349584391] 'process raft request'  (duration: 144.805451ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:09:39.849095Z","caller":"traceutil/trace.go:171","msg":"trace[912834345] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"397.754932ms","start":"2024-08-15T00:09:39.451326Z","end":"2024-08-15T00:09:39.849081Z","steps":["trace[912834345] 'process raft request'  (duration: 397.511348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:09:39.849213Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:09:39.451307Z","time spent":"397.835978ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1641 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-15T00:10:33.462896Z","caller":"traceutil/trace.go:171","msg":"trace[352033245] transaction","detail":"{read_only:false; response_revision:1928; number_of_response:1; }","duration":"113.663358ms","start":"2024-08-15T00:10:33.349214Z","end":"2024-08-15T00:10:33.462877Z","steps":["trace[352033245] 'process raft request'  (duration: 113.547568ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:13:54 up 7 min,  0 users,  load average: 0.40, 0.68, 0.43
	Linux addons-799058 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fcfebfef6006cace0c7b56eab13a03bac93d710d647e100fbec827b53ffedf8f] <==
	I0815 00:08:32.321056       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 00:08:55.769007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42540: use of closed network connection
	E0815 00:08:55.965205       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42562: use of closed network connection
	I0815 00:09:10.737834       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 00:09:11.777892       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0815 00:09:29.574393       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 00:09:35.764861       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.107.151"}
	I0815 00:09:51.090383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.090463       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.131324       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.131435       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.133030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.133076       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.141069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.141171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 00:09:51.189368       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 00:09:51.189410       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 00:09:52.133941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 00:09:52.189904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 00:09:52.292393       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0815 00:09:52.803797       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 00:09:52.964637       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.230.173"}
	E0815 00:09:54.738976       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0815 00:10:01.002892       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.195:8443->10.244.0.32:54876: read: connection reset by peer
	I0815 00:12:16.810436       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.45.140"}
	
	
	==> kube-controller-manager [169699f15f7ad0b3dc46f97c2c04691edfe6ddadb6b91b59cec4964c0636b8e2] <==
	I0815 00:12:16.647897       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="76.57µs"
	I0815 00:12:19.066203       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 00:12:19.068932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="5.246µs"
	I0815 00:12:19.072532       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0815 00:12:20.495383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.176844ms"
	I0815 00:12:20.496216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.484µs"
	I0815 00:12:28.248495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-799058"
	I0815 00:12:29.138667       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0815 00:12:32.993133       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:32.993259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:42.482684       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:42.482891       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:44.537167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:44.537305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:12:51.420098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:12:51.420213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:22.331772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:22.331943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:32.850636       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:32.850811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:37.468550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:37.468610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 00:13:49.051406       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 00:13:49.051576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 00:13:52.948654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="9.539µs"
	
	
	==> kube-proxy [1a5055649b6ad4ba58bcd665649240ca6d86512964025b145411f4c6651c7c31] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:06:57.566316       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:06:57.594285       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0815 00:06:57.598973       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:06:57.673466       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:06:57.673528       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:06:57.673555       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:06:57.676679       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:06:57.676936       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:06:57.676947       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:06:57.683467       1 config.go:197] "Starting service config controller"
	I0815 00:06:57.683492       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:06:57.683517       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:06:57.683521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:06:57.688459       1 config.go:326] "Starting node config controller"
	I0815 00:06:57.688470       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:06:57.785047       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:06:57.785084       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:06:57.788831       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807c4f41537adde66a1079ba8ad8690151f5e844744701af8c1eca0a640b7559] <==
	W0815 00:06:48.645369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:06:48.645401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:48.645452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:06:48.645476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:48.645585       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:06:48.645610       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:06:48.644509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:06:48.645776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.503442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:06:49.503494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.544877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:06:49.544929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.567765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:06:49.567818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.679917       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:06:49.679967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:06:49.681017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 00:06:49.681060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.696782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:06:49.696846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.758045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:06:49.758099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:06:49.816840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:06:49.816933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:06:51.338222       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:12:51 addons-799058 kubelet[1228]: I0815 00:12:51.770072    1228 scope.go:117] "RemoveContainer" containerID="de790a53febea377b11276e0a41297b62d40f1771b20b93694b0bc964019409a"
	Aug 15 00:13:01 addons-799058 kubelet[1228]: E0815 00:13:01.355164    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680781354536240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:01 addons-799058 kubelet[1228]: E0815 00:13:01.355196    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680781354536240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:11 addons-799058 kubelet[1228]: E0815 00:13:11.359292    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680791358634816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:11 addons-799058 kubelet[1228]: E0815 00:13:11.359337    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680791358634816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:21 addons-799058 kubelet[1228]: E0815 00:13:21.361879    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680801361464054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:21 addons-799058 kubelet[1228]: E0815 00:13:21.361918    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680801361464054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:31 addons-799058 kubelet[1228]: E0815 00:13:31.364780    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680811364072129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:31 addons-799058 kubelet[1228]: E0815 00:13:31.365116    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680811364072129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:41 addons-799058 kubelet[1228]: E0815 00:13:41.368173    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680821367783166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:41 addons-799058 kubelet[1228]: E0815 00:13:41.368544    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680821367783166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:51 addons-799058 kubelet[1228]: E0815 00:13:51.165402    1228 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:13:51 addons-799058 kubelet[1228]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:13:51 addons-799058 kubelet[1228]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:13:51 addons-799058 kubelet[1228]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:13:51 addons-799058 kubelet[1228]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:13:51 addons-799058 kubelet[1228]: E0815 00:13:51.371625    1228 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680831371287243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:51 addons-799058 kubelet[1228]: E0815 00:13:51.371665    1228 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723680831371287243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:13:52 addons-799058 kubelet[1228]: I0815 00:13:52.969791    1228 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-wbmmj" podStartSLOduration=94.779189035 podStartE2EDuration="1m36.969704088s" podCreationTimestamp="2024-08-15 00:12:16 +0000 UTC" firstStartedPulling="2024-08-15 00:12:17.189068335 +0000 UTC m=+326.170644555" lastFinishedPulling="2024-08-15 00:12:19.379583387 +0000 UTC m=+328.361159608" observedRunningTime="2024-08-15 00:12:20.487955294 +0000 UTC m=+329.469531646" watchObservedRunningTime="2024-08-15 00:13:52.969704088 +0000 UTC m=+421.951280296"
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.316109    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-tmp-dir\") pod \"95a56e8f-f680-4b31-bdc3-34e9e748a9b7\" (UID: \"95a56e8f-f680-4b31-bdc3-34e9e748a9b7\") "
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.316171    1228 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf75s\" (UniqueName: \"kubernetes.io/projected/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-kube-api-access-jf75s\") pod \"95a56e8f-f680-4b31-bdc3-34e9e748a9b7\" (UID: \"95a56e8f-f680-4b31-bdc3-34e9e748a9b7\") "
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.316700    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "95a56e8f-f680-4b31-bdc3-34e9e748a9b7" (UID: "95a56e8f-f680-4b31-bdc3-34e9e748a9b7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.319401    1228 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-kube-api-access-jf75s" (OuterVolumeSpecName: "kube-api-access-jf75s") pod "95a56e8f-f680-4b31-bdc3-34e9e748a9b7" (UID: "95a56e8f-f680-4b31-bdc3-34e9e748a9b7"). InnerVolumeSpecName "kube-api-access-jf75s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.416894    1228 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jf75s\" (UniqueName: \"kubernetes.io/projected/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-kube-api-access-jf75s\") on node \"addons-799058\" DevicePath \"\""
	Aug 15 00:13:54 addons-799058 kubelet[1228]: I0815 00:13:54.416921    1228 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/95a56e8f-f680-4b31-bdc3-34e9e748a9b7-tmp-dir\") on node \"addons-799058\" DevicePath \"\""
	
	
	==> storage-provisioner [4e32777771788cec98b92b985180c1cad8b8d5fa1b5f0b9c1db94c1dbb843290] <==
	I0815 00:07:03.311878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 00:07:03.406460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 00:07:03.413546       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 00:07:03.696704       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 00:07:03.700431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4105451-058f-494a-a107-b03c804af7c5", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b became leader
	I0815 00:07:03.704636       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b!
	I0815 00:07:03.808788       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-799058_f042fa5f-4ad4-487b-a158-668d79c9351b!
	

                                                
                                                
-- /stdout --
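The etcd section of the log above repeatedly warns that applies exceeded the 100ms expected-duration, which in etcd's own guidance usually points at slow disk I/O or an overloaded host rather than a Kubernetes problem. Below is a minimal Go sketch, not part of the test suite, that scans a saved copy of this log for those warnings and prints how long each slow request took; the filename etcd.log is an assumption, so save the etcd portion of the post-mortem to that path before running it.

	// slowetcd.go: filter "apply request took too long" warnings out of a saved
	// copy of the etcd log above and print their durations. Hypothetical helper,
	// not part of the minikube test suite.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
		"strings"
	)

	type etcdEntry struct {
		Level string `json:"level"`
		TS    string `json:"ts"`
		Msg   string `json:"msg"`
		Took  string `json:"took"`
	}

	func main() {
		f, err := os.Open("etcd.log") // assumed path of the saved log section
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some trace lines are long
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // skip blank lines and section headers
			}
			var e etcdEntry
			if err := json.Unmarshal([]byte(line), &e); err != nil {
				continue
			}
			if e.Msg == "apply request took too long" {
				fmt.Printf("%s  took=%s\n", e.TS, e.Took)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}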
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-799058 -n addons-799058
helpers_test.go:261: (dbg) Run:  kubectl --context addons-799058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (290.74s)
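A quick way to check by hand whether metrics-server is actually serving the metrics.k8s.io API on this profile is kubectl top, which is roughly what the MetricsServer test exercises. The following is a minimal Go sketch of that manual check, assuming kubectl is on PATH and the addons-799058 context from the logs still exists; it is not the test's own code.

	// checkmetrics.go: ask the Metrics API for node metrics via kubectl.
	// A non-zero exit mirrors the kind of failure recorded above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// `kubectl top node` only succeeds once metrics-server is registered
		// and serving metrics.k8s.io for the cluster.
		out, err := exec.Command("kubectl", "--context", "addons-799058", "top", "node").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			log.Fatalf("metrics.k8s.io not ready: %v", err)
		}
	}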

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-799058
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-799058: exit status 82 (2m0.460056074s)

                                                
                                                
-- stdout --
	* Stopping node "addons-799058"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-799058" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-799058
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-799058: exit status 11 (21.649146591s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-799058" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-799058
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-799058: exit status 11 (6.143439448s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-799058" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-799058
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-799058: exit status 11 (6.143156666s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-799058" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)
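The stop timeout (exit status 82, reported as GUEST_STOP_TIMEOUT) and the follow-on addon enable/disable failures (dial tcp to the node's SSH port returning "no route to host") all come from the same stuck "minikube stop". Below is a minimal Go sketch that re-runs the failing command and surfaces its exit code, assuming the same workspace layout as the logs (out/minikube-linux-amd64 relative to the working directory); it is a reproduction aid, not the test's logic.

	// stopcheck.go: re-run the failing stop command and report its exit code.
	// The test output above maps exit status 82 to GUEST_STOP_TIMEOUT.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "addons-799058")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("stop failed with exit code %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Printf("could not run minikube: %v\n", err)
		}
	}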

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 node stop m02 -v=7 --alsologtostderr
E0815 00:25:22.499021   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:26:03.461339   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.443648129s)

                                                
                                                
-- stdout --
	* Stopping node "ha-863044-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:25:13.864525   34747 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:25:13.864673   34747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:25:13.864684   34747 out.go:304] Setting ErrFile to fd 2...
	I0815 00:25:13.864690   34747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:25:13.864928   34747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:25:13.865167   34747 mustload.go:65] Loading cluster: ha-863044
	I0815 00:25:13.866334   34747 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:25:13.866426   34747 stop.go:39] StopHost: ha-863044-m02
	I0815 00:25:13.867134   34747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:25:13.867176   34747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:25:13.883696   34747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0815 00:25:13.884105   34747 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:25:13.884650   34747 main.go:141] libmachine: Using API Version  1
	I0815 00:25:13.884706   34747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:25:13.885023   34747 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:25:13.887300   34747 out.go:177] * Stopping node "ha-863044-m02"  ...
	I0815 00:25:13.888470   34747 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 00:25:13.888499   34747 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:25:13.888718   34747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 00:25:13.888747   34747 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:25:13.891801   34747 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:25:13.892215   34747 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:25:13.892241   34747 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:25:13.892403   34747 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:25:13.892580   34747 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:25:13.892748   34747 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:25:13.892888   34747 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:25:13.974714   34747 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 00:25:14.027299   34747 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 00:25:14.079896   34747 main.go:141] libmachine: Stopping "ha-863044-m02"...
	I0815 00:25:14.079925   34747 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:25:14.081388   34747 main.go:141] libmachine: (ha-863044-m02) Calling .Stop
	I0815 00:25:14.085456   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 0/120
	I0815 00:25:15.087291   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 1/120
	I0815 00:25:16.088416   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 2/120
	I0815 00:25:17.090417   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 3/120
	I0815 00:25:18.091672   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 4/120
	I0815 00:25:19.093513   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 5/120
	I0815 00:25:20.094904   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 6/120
	I0815 00:25:21.096029   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 7/120
	I0815 00:25:22.097353   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 8/120
	I0815 00:25:23.098814   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 9/120
	I0815 00:25:24.100522   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 10/120
	I0815 00:25:25.101807   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 11/120
	I0815 00:25:26.103252   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 12/120
	I0815 00:25:27.104535   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 13/120
	I0815 00:25:28.105810   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 14/120
	I0815 00:25:29.107446   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 15/120
	I0815 00:25:30.109185   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 16/120
	I0815 00:25:31.111094   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 17/120
	I0815 00:25:32.112535   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 18/120
	I0815 00:25:33.113757   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 19/120
	I0815 00:25:34.115852   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 20/120
	I0815 00:25:35.117131   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 21/120
	I0815 00:25:36.118418   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 22/120
	I0815 00:25:37.119805   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 23/120
	I0815 00:25:38.121072   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 24/120
	I0815 00:25:39.122964   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 25/120
	I0815 00:25:40.124225   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 26/120
	I0815 00:25:41.125981   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 27/120
	I0815 00:25:42.127685   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 28/120
	I0815 00:25:43.129198   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 29/120
	I0815 00:25:44.131036   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 30/120
	I0815 00:25:45.133267   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 31/120
	I0815 00:25:46.135243   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 32/120
	I0815 00:25:47.136489   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 33/120
	I0815 00:25:48.137734   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 34/120
	I0815 00:25:49.139735   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 35/120
	I0815 00:25:50.141174   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 36/120
	I0815 00:25:51.143272   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 37/120
	I0815 00:25:52.144781   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 38/120
	I0815 00:25:53.147128   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 39/120
	I0815 00:25:54.149231   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 40/120
	I0815 00:25:55.150907   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 41/120
	I0815 00:25:56.152242   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 42/120
	I0815 00:25:57.153596   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 43/120
	I0815 00:25:58.155249   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 44/120
	I0815 00:25:59.156675   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 45/120
	I0815 00:26:00.157859   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 46/120
	I0815 00:26:01.159169   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 47/120
	I0815 00:26:02.160355   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 48/120
	I0815 00:26:03.161764   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 49/120
	I0815 00:26:04.163971   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 50/120
	I0815 00:26:05.165284   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 51/120
	I0815 00:26:06.166593   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 52/120
	I0815 00:26:07.168422   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 53/120
	I0815 00:26:08.169678   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 54/120
	I0815 00:26:09.171730   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 55/120
	I0815 00:26:10.173023   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 56/120
	I0815 00:26:11.174288   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 57/120
	I0815 00:26:12.175891   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 58/120
	I0815 00:26:13.177179   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 59/120
	I0815 00:26:14.178968   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 60/120
	I0815 00:26:15.180225   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 61/120
	I0815 00:26:16.181595   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 62/120
	I0815 00:26:17.182799   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 63/120
	I0815 00:26:18.184109   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 64/120
	I0815 00:26:19.185921   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 65/120
	I0815 00:26:20.187356   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 66/120
	I0815 00:26:21.188751   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 67/120
	I0815 00:26:22.190060   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 68/120
	I0815 00:26:23.191348   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 69/120
	I0815 00:26:24.193408   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 70/120
	I0815 00:26:25.195514   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 71/120
	I0815 00:26:26.196872   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 72/120
	I0815 00:26:27.198152   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 73/120
	I0815 00:26:28.199269   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 74/120
	I0815 00:26:29.200546   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 75/120
	I0815 00:26:30.201758   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 76/120
	I0815 00:26:31.203197   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 77/120
	I0815 00:26:32.204443   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 78/120
	I0815 00:26:33.205680   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 79/120
	I0815 00:26:34.207647   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 80/120
	I0815 00:26:35.208886   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 81/120
	I0815 00:26:36.211162   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 82/120
	I0815 00:26:37.212545   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 83/120
	I0815 00:26:38.213769   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 84/120
	I0815 00:26:39.215686   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 85/120
	I0815 00:26:40.217039   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 86/120
	I0815 00:26:41.219262   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 87/120
	I0815 00:26:42.220555   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 88/120
	I0815 00:26:43.222015   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 89/120
	I0815 00:26:44.224043   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 90/120
	I0815 00:26:45.225352   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 91/120
	I0815 00:26:46.226518   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 92/120
	I0815 00:26:47.227777   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 93/120
	I0815 00:26:48.229105   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 94/120
	I0815 00:26:49.230915   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 95/120
	I0815 00:26:50.232486   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 96/120
	I0815 00:26:51.233796   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 97/120
	I0815 00:26:52.235029   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 98/120
	I0815 00:26:53.236219   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 99/120
	I0815 00:26:54.238205   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 100/120
	I0815 00:26:55.239676   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 101/120
	I0815 00:26:56.241276   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 102/120
	I0815 00:26:57.243320   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 103/120
	I0815 00:26:58.244637   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 104/120
	I0815 00:26:59.245880   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 105/120
	I0815 00:27:00.247588   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 106/120
	I0815 00:27:01.248872   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 107/120
	I0815 00:27:02.250964   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 108/120
	I0815 00:27:03.252273   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 109/120
	I0815 00:27:04.254499   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 110/120
	I0815 00:27:05.255785   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 111/120
	I0815 00:27:06.257226   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 112/120
	I0815 00:27:07.258559   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 113/120
	I0815 00:27:08.259796   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 114/120
	I0815 00:27:09.261529   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 115/120
	I0815 00:27:10.263222   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 116/120
	I0815 00:27:11.264386   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 117/120
	I0815 00:27:12.265822   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 118/120
	I0815 00:27:13.266988   34747 main.go:141] libmachine: (ha-863044-m02) Waiting for machine to stop 119/120
	I0815 00:27:14.268184   34747 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 00:27:14.268339   34747 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-863044 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
E0815 00:27:25.383613   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (19.013587631s)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:27:14.310825   35177 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:27:14.311090   35177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:14.311098   35177 out.go:304] Setting ErrFile to fd 2...
	I0815 00:27:14.311103   35177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:14.311343   35177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:27:14.311555   35177 out.go:298] Setting JSON to false
	I0815 00:27:14.311582   35177 mustload.go:65] Loading cluster: ha-863044
	I0815 00:27:14.311665   35177 notify.go:220] Checking for updates...
	I0815 00:27:14.311998   35177 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:27:14.312021   35177 status.go:255] checking status of ha-863044 ...
	I0815 00:27:14.312467   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.312528   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.330823   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0815 00:27:14.331291   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.331963   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.331985   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.332367   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.332556   35177 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:27:14.334483   35177 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:27:14.334505   35177 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:14.334813   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.334847   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.349793   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0815 00:27:14.350134   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.350612   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.350634   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.350892   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.351054   35177 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:27:14.353540   35177 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:14.353935   35177 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:14.353970   35177 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:14.354099   35177 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:14.354498   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.354562   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.368572   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0815 00:27:14.369113   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.369606   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.369638   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.369991   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.370198   35177 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:27:14.370405   35177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:14.370444   35177 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:27:14.372906   35177 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:14.373264   35177 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:14.373289   35177 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:14.373425   35177 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:27:14.373595   35177 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:27:14.373750   35177 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:27:14.373876   35177 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:27:14.461118   35177 ssh_runner.go:195] Run: systemctl --version
	I0815 00:27:14.472465   35177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:14.489263   35177 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:14.489300   35177 api_server.go:166] Checking apiserver status ...
	I0815 00:27:14.489333   35177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:14.504054   35177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:27:14.513509   35177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:14.513572   35177 ssh_runner.go:195] Run: ls
	I0815 00:27:14.517544   35177 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:14.521494   35177 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:14.521516   35177 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:27:14.521529   35177 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:14.521576   35177 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:27:14.521901   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.521934   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.536327   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0815 00:27:14.536682   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.537110   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.537130   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.537408   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.537610   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:27:14.539100   35177 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:27:14.539116   35177 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:14.539453   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.539502   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.553794   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0815 00:27:14.554105   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.554518   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.554538   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.554808   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.554985   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:27:14.557540   35177 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:14.557996   35177 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:14.558022   35177 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:14.558164   35177 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:14.558447   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:14.558493   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:14.573509   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38391
	I0815 00:27:14.573882   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:14.574309   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:14.574331   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:14.574604   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:14.574779   35177 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:27:14.574950   35177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:14.574969   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:27:14.577597   35177 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:14.577977   35177 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:14.578000   35177 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:14.578129   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:27:14.578272   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:27:14.578362   35177 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:27:14.578474   35177 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:27:32.932853   35177 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:32.932983   35177 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:27:32.933007   35177 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:32.933017   35177 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:27:32.933040   35177 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:32.933047   35177 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:27:32.933469   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:32.933530   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:32.948868   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0815 00:27:32.949259   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:32.949734   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:32.949753   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:32.950024   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:32.950203   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:27:32.951747   35177 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:27:32.951765   35177 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:32.952205   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:32.952249   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:32.966636   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0815 00:27:32.967011   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:32.967464   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:32.967487   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:32.967765   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:32.967972   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:27:32.970544   35177 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:32.970950   35177 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:32.970977   35177 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:32.971075   35177 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:32.971385   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:32.971421   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:32.986427   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40605
	I0815 00:27:32.986766   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:32.987238   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:32.987257   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:32.987522   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:32.987694   35177 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:27:32.987847   35177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:32.987878   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:27:32.990530   35177 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:32.990939   35177 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:32.990968   35177 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:32.991123   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:27:32.991279   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:27:32.991420   35177 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:27:32.991576   35177 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:27:33.073277   35177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:33.089778   35177 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:33.089808   35177 api_server.go:166] Checking apiserver status ...
	I0815 00:27:33.089842   35177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:33.103393   35177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:27:33.112467   35177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:33.112522   35177 ssh_runner.go:195] Run: ls
	I0815 00:27:33.116276   35177 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:33.120424   35177 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:33.120444   35177 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:27:33.120454   35177 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:33.120474   35177 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:27:33.120784   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:33.120825   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:33.135873   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I0815 00:27:33.136229   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:33.136687   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:33.136711   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:33.137001   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:33.137164   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:27:33.138618   35177 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:27:33.138631   35177 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:33.138887   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:33.138918   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:33.155799   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0815 00:27:33.156212   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:33.156707   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:33.156740   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:33.157106   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:33.157298   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:27:33.160135   35177 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:33.160515   35177 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:33.160535   35177 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:33.160670   35177 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:33.161012   35177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:33.161069   35177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:33.176097   35177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0815 00:27:33.176455   35177 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:33.176917   35177 main.go:141] libmachine: Using API Version  1
	I0815 00:27:33.176940   35177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:33.177365   35177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:33.177560   35177 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:27:33.177756   35177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:33.177778   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:27:33.180265   35177 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:33.180671   35177 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:33.180697   35177 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:33.180853   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:27:33.181024   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:27:33.181172   35177 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:27:33.181296   35177 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:27:33.266008   35177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:33.280589   35177 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863044 -n ha-863044
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863044 logs -n 25: (1.262782321s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m03_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m04 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp testdata/cp-test.txt                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m04_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03:/home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m03 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863044 node stop m02 -v=7                                                     | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:20:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:20:37.881748   30723 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:20:37.881988   30723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:37.881995   30723 out.go:304] Setting ErrFile to fd 2...
	I0815 00:20:37.881999   30723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:37.882201   30723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:20:37.882746   30723 out.go:298] Setting JSON to false
	I0815 00:20:37.883560   30723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3783,"bootTime":1723677455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:20:37.883615   30723 start.go:139] virtualization: kvm guest
	I0815 00:20:37.885864   30723 out.go:177] * [ha-863044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:20:37.887153   30723 notify.go:220] Checking for updates...
	I0815 00:20:37.887173   30723 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:20:37.888629   30723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:20:37.890054   30723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:20:37.891426   30723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:37.892691   30723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:20:37.894038   30723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:20:37.895541   30723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:20:37.930133   30723 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 00:20:37.931685   30723 start.go:297] selected driver: kvm2
	I0815 00:20:37.931696   30723 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:20:37.931714   30723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:20:37.932433   30723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:20:37.932500   30723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:20:37.947617   30723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:20:37.947667   30723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:20:37.947865   30723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:20:37.947924   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:20:37.947935   30723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 00:20:37.947940   30723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:20:37.947987   30723 start.go:340] cluster config:
	{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:20:37.948079   30723 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:20:37.949981   30723 out.go:177] * Starting "ha-863044" primary control-plane node in "ha-863044" cluster
	I0815 00:20:37.951405   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:20:37.951428   30723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:20:37.951435   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:20:37.951509   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:20:37.951518   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:20:37.951836   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:20:37.951856   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json: {Name:mkc2ad5323f3c8995300a3bc69f9d801a70bd1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:20:37.951994   30723 start.go:360] acquireMachinesLock for ha-863044: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:20:37.952020   30723 start.go:364] duration metric: took 14.311µs to acquireMachinesLock for "ha-863044"
	I0815 00:20:37.952035   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:20:37.952080   30723 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 00:20:37.953646   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:20:37.953774   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:37.953808   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:37.967545   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I0815 00:20:37.967960   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:37.968468   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:20:37.968511   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:37.968850   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:37.969020   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:20:37.969137   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:20:37.969263   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:20:37.969294   30723 client.go:168] LocalClient.Create starting
	I0815 00:20:37.969328   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:20:37.969362   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:20:37.969377   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:20:37.969430   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:20:37.969453   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:20:37.969472   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:20:37.969493   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:20:37.969502   30723 main.go:141] libmachine: (ha-863044) Calling .PreCreateCheck
	I0815 00:20:37.969775   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:20:37.970350   30723 main.go:141] libmachine: Creating machine...
	I0815 00:20:37.970364   30723 main.go:141] libmachine: (ha-863044) Calling .Create
	I0815 00:20:37.970467   30723 main.go:141] libmachine: (ha-863044) Creating KVM machine...
	I0815 00:20:37.971753   30723 main.go:141] libmachine: (ha-863044) DBG | found existing default KVM network
	I0815 00:20:37.972324   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:37.972194   30746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0815 00:20:37.972351   30723 main.go:141] libmachine: (ha-863044) DBG | created network xml: 
	I0815 00:20:37.972371   30723 main.go:141] libmachine: (ha-863044) DBG | <network>
	I0815 00:20:37.972382   30723 main.go:141] libmachine: (ha-863044) DBG |   <name>mk-ha-863044</name>
	I0815 00:20:37.972396   30723 main.go:141] libmachine: (ha-863044) DBG |   <dns enable='no'/>
	I0815 00:20:37.972405   30723 main.go:141] libmachine: (ha-863044) DBG |   
	I0815 00:20:37.972418   30723 main.go:141] libmachine: (ha-863044) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 00:20:37.972428   30723 main.go:141] libmachine: (ha-863044) DBG |     <dhcp>
	I0815 00:20:37.972440   30723 main.go:141] libmachine: (ha-863044) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 00:20:37.972457   30723 main.go:141] libmachine: (ha-863044) DBG |     </dhcp>
	I0815 00:20:37.972474   30723 main.go:141] libmachine: (ha-863044) DBG |   </ip>
	I0815 00:20:37.972484   30723 main.go:141] libmachine: (ha-863044) DBG |   
	I0815 00:20:37.972494   30723 main.go:141] libmachine: (ha-863044) DBG | </network>
	I0815 00:20:37.972503   30723 main.go:141] libmachine: (ha-863044) DBG | 
	I0815 00:20:37.977541   30723 main.go:141] libmachine: (ha-863044) DBG | trying to create private KVM network mk-ha-863044 192.168.39.0/24...
	I0815 00:20:38.042063   30723 main.go:141] libmachine: (ha-863044) DBG | private KVM network mk-ha-863044 192.168.39.0/24 created
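The mk-ha-863044 network above is defined from the generated XML and then started. A minimal sketch of that step with the libvirt Go bindings (the import path, connection URI and hard-coded XML literal are illustrative assumptions, not taken from minikube's source):

package main

import (
    "log"

    libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
    // Connect to the local system libvirtd, as the kvm2 driver does (qemu:///system is assumed).
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // XML equivalent to the mk-ha-863044 network logged above.
    netXML := `<network>
  <name>mk-ha-863044</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`

    // Define the persistent network, then start it.
    net, err := conn.NetworkDefineXML(netXML)
    if err != nil {
        log.Fatal(err)
    }
    defer net.Free()
    if err := net.Create(); err != nil {
        log.Fatal(err)
    }
    log.Println("network mk-ha-863044 is active")
}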
	I0815 00:20:38.042092   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.042022   30746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:38.042105   30723 main.go:141] libmachine: (ha-863044) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 ...
	I0815 00:20:38.042118   30723 main.go:141] libmachine: (ha-863044) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:20:38.042177   30723 main.go:141] libmachine: (ha-863044) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:20:38.290980   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.290871   30746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa...
	I0815 00:20:38.474892   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.474766   30746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/ha-863044.rawdisk...
	I0815 00:20:38.474942   30723 main.go:141] libmachine: (ha-863044) DBG | Writing magic tar header
	I0815 00:20:38.474957   30723 main.go:141] libmachine: (ha-863044) DBG | Writing SSH key tar header
	I0815 00:20:38.474968   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.474904   30746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 ...
	I0815 00:20:38.475113   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044
	I0815 00:20:38.475151   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 (perms=drwx------)
	I0815 00:20:38.475178   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:20:38.475190   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:20:38.475200   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:38.475218   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:20:38.475230   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:20:38.475245   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:20:38.475256   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home
	I0815 00:20:38.475265   30723 main.go:141] libmachine: (ha-863044) DBG | Skipping /home - not owner
	I0815 00:20:38.475279   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:20:38.475296   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:20:38.475306   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:20:38.475316   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:20:38.475326   30723 main.go:141] libmachine: (ha-863044) Creating domain...
	I0815 00:20:38.476266   30723 main.go:141] libmachine: (ha-863044) define libvirt domain using xml: 
	I0815 00:20:38.476283   30723 main.go:141] libmachine: (ha-863044) <domain type='kvm'>
	I0815 00:20:38.476303   30723 main.go:141] libmachine: (ha-863044)   <name>ha-863044</name>
	I0815 00:20:38.476312   30723 main.go:141] libmachine: (ha-863044)   <memory unit='MiB'>2200</memory>
	I0815 00:20:38.476320   30723 main.go:141] libmachine: (ha-863044)   <vcpu>2</vcpu>
	I0815 00:20:38.476327   30723 main.go:141] libmachine: (ha-863044)   <features>
	I0815 00:20:38.476331   30723 main.go:141] libmachine: (ha-863044)     <acpi/>
	I0815 00:20:38.476336   30723 main.go:141] libmachine: (ha-863044)     <apic/>
	I0815 00:20:38.476347   30723 main.go:141] libmachine: (ha-863044)     <pae/>
	I0815 00:20:38.476354   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476363   30723 main.go:141] libmachine: (ha-863044)   </features>
	I0815 00:20:38.476377   30723 main.go:141] libmachine: (ha-863044)   <cpu mode='host-passthrough'>
	I0815 00:20:38.476394   30723 main.go:141] libmachine: (ha-863044)   
	I0815 00:20:38.476402   30723 main.go:141] libmachine: (ha-863044)   </cpu>
	I0815 00:20:38.476407   30723 main.go:141] libmachine: (ha-863044)   <os>
	I0815 00:20:38.476412   30723 main.go:141] libmachine: (ha-863044)     <type>hvm</type>
	I0815 00:20:38.476422   30723 main.go:141] libmachine: (ha-863044)     <boot dev='cdrom'/>
	I0815 00:20:38.476428   30723 main.go:141] libmachine: (ha-863044)     <boot dev='hd'/>
	I0815 00:20:38.476438   30723 main.go:141] libmachine: (ha-863044)     <bootmenu enable='no'/>
	I0815 00:20:38.476444   30723 main.go:141] libmachine: (ha-863044)   </os>
	I0815 00:20:38.476453   30723 main.go:141] libmachine: (ha-863044)   <devices>
	I0815 00:20:38.476461   30723 main.go:141] libmachine: (ha-863044)     <disk type='file' device='cdrom'>
	I0815 00:20:38.476472   30723 main.go:141] libmachine: (ha-863044)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/boot2docker.iso'/>
	I0815 00:20:38.476480   30723 main.go:141] libmachine: (ha-863044)       <target dev='hdc' bus='scsi'/>
	I0815 00:20:38.476485   30723 main.go:141] libmachine: (ha-863044)       <readonly/>
	I0815 00:20:38.476489   30723 main.go:141] libmachine: (ha-863044)     </disk>
	I0815 00:20:38.476511   30723 main.go:141] libmachine: (ha-863044)     <disk type='file' device='disk'>
	I0815 00:20:38.476534   30723 main.go:141] libmachine: (ha-863044)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:20:38.476552   30723 main.go:141] libmachine: (ha-863044)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/ha-863044.rawdisk'/>
	I0815 00:20:38.476561   30723 main.go:141] libmachine: (ha-863044)       <target dev='hda' bus='virtio'/>
	I0815 00:20:38.476571   30723 main.go:141] libmachine: (ha-863044)     </disk>
	I0815 00:20:38.476581   30723 main.go:141] libmachine: (ha-863044)     <interface type='network'>
	I0815 00:20:38.476592   30723 main.go:141] libmachine: (ha-863044)       <source network='mk-ha-863044'/>
	I0815 00:20:38.476602   30723 main.go:141] libmachine: (ha-863044)       <model type='virtio'/>
	I0815 00:20:38.476619   30723 main.go:141] libmachine: (ha-863044)     </interface>
	I0815 00:20:38.476630   30723 main.go:141] libmachine: (ha-863044)     <interface type='network'>
	I0815 00:20:38.476666   30723 main.go:141] libmachine: (ha-863044)       <source network='default'/>
	I0815 00:20:38.476692   30723 main.go:141] libmachine: (ha-863044)       <model type='virtio'/>
	I0815 00:20:38.476703   30723 main.go:141] libmachine: (ha-863044)     </interface>
	I0815 00:20:38.476714   30723 main.go:141] libmachine: (ha-863044)     <serial type='pty'>
	I0815 00:20:38.476722   30723 main.go:141] libmachine: (ha-863044)       <target port='0'/>
	I0815 00:20:38.476732   30723 main.go:141] libmachine: (ha-863044)     </serial>
	I0815 00:20:38.476742   30723 main.go:141] libmachine: (ha-863044)     <console type='pty'>
	I0815 00:20:38.476753   30723 main.go:141] libmachine: (ha-863044)       <target type='serial' port='0'/>
	I0815 00:20:38.476763   30723 main.go:141] libmachine: (ha-863044)     </console>
	I0815 00:20:38.476773   30723 main.go:141] libmachine: (ha-863044)     <rng model='virtio'>
	I0815 00:20:38.476784   30723 main.go:141] libmachine: (ha-863044)       <backend model='random'>/dev/random</backend>
	I0815 00:20:38.476793   30723 main.go:141] libmachine: (ha-863044)     </rng>
	I0815 00:20:38.476801   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476809   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476818   30723 main.go:141] libmachine: (ha-863044)   </devices>
	I0815 00:20:38.476850   30723 main.go:141] libmachine: (ha-863044) </domain>
	I0815 00:20:38.476861   30723 main.go:141] libmachine: (ha-863044) 
	I0815 00:20:38.480820   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:d5:7c:0d in network default
	I0815 00:20:38.481370   30723 main.go:141] libmachine: (ha-863044) Ensuring networks are active...
	I0815 00:20:38.481385   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:38.482026   30723 main.go:141] libmachine: (ha-863044) Ensuring network default is active
	I0815 00:20:38.482381   30723 main.go:141] libmachine: (ha-863044) Ensuring network mk-ha-863044 is active
	I0815 00:20:38.482835   30723 main.go:141] libmachine: (ha-863044) Getting domain xml...
	I0815 00:20:38.483552   30723 main.go:141] libmachine: (ha-863044) Creating domain...
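The domain goes through the same define-then-create cycle once its XML is rendered. Continuing with the *libvirt.Connect from the sketch above, a trimmed illustration (the XML is abbreviated; the disks, interfaces and serial console from the logged <domain> are omitted here):

// defineAndStart is a sketch of the "define libvirt domain using xml" /
// "Creating domain..." steps above: define the domain, then boot it.
func defineAndStart(conn *libvirt.Connect) error {
    domXML := `<domain type='kvm'>
  <name>ha-863044</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
</domain>`
    dom, err := conn.DomainDefineXML(domXML) // persistent definition
    if err != nil {
        return err
    }
    defer dom.Free()
    return dom.Create() // Create() boots the defined domain
}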
	I0815 00:20:39.661795   30723 main.go:141] libmachine: (ha-863044) Waiting to get IP...
	I0815 00:20:39.662578   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:39.662947   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:39.662977   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:39.662923   30746 retry.go:31] will retry after 276.183296ms: waiting for machine to come up
	I0815 00:20:39.940317   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:39.940830   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:39.940854   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:39.940780   30746 retry.go:31] will retry after 340.971065ms: waiting for machine to come up
	I0815 00:20:40.283459   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:40.283896   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:40.283923   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:40.283849   30746 retry.go:31] will retry after 409.225445ms: waiting for machine to come up
	I0815 00:20:40.694512   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:40.694967   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:40.694995   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:40.694914   30746 retry.go:31] will retry after 440.059085ms: waiting for machine to come up
	I0815 00:20:41.136412   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:41.136843   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:41.136870   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:41.136804   30746 retry.go:31] will retry after 677.697429ms: waiting for machine to come up
	I0815 00:20:41.815715   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:41.816087   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:41.816111   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:41.816049   30746 retry.go:31] will retry after 694.446796ms: waiting for machine to come up
	I0815 00:20:42.511865   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:42.512309   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:42.512343   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:42.512273   30746 retry.go:31] will retry after 1.147726516s: waiting for machine to come up
	I0815 00:20:43.661329   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:43.661883   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:43.661913   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:43.661833   30746 retry.go:31] will retry after 1.094040829s: waiting for machine to come up
	I0815 00:20:44.757629   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:44.758099   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:44.758128   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:44.758042   30746 retry.go:31] will retry after 1.277852484s: waiting for machine to come up
	I0815 00:20:46.037289   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:46.037687   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:46.037735   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:46.037659   30746 retry.go:31] will retry after 1.561255826s: waiting for machine to come up
	I0815 00:20:47.601481   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:47.601960   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:47.601989   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:47.601914   30746 retry.go:31] will retry after 2.267168102s: waiting for machine to come up
	I0815 00:20:49.871062   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:49.871453   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:49.871481   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:49.871403   30746 retry.go:31] will retry after 2.480250796s: waiting for machine to come up
	I0815 00:20:52.354878   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:52.355276   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:52.355319   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:52.355209   30746 retry.go:31] will retry after 4.383643095s: waiting for machine to come up
	I0815 00:20:56.742910   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:56.743240   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:56.743266   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:56.743189   30746 retry.go:31] will retry after 5.191918682s: waiting for machine to come up
	I0815 00:21:01.937574   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:01.938054   30723 main.go:141] libmachine: (ha-863044) Found IP for machine: 192.168.39.6
	I0815 00:21:01.938082   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has current primary IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
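The "Waiting to get IP" loop above retries with growing delays until the guest's MAC address appears in the network's DHCP leases. A sketch of such a poll (assumes the fmt, strings and time imports plus the libvirt bindings; the fixed sleep stands in for the randomized backoff seen in the retry lines):

// waitForIP polls the network's DHCP leases until one matches the domain's MAC.
func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        leases, err := net.GetDHCPLeases()
        if err != nil {
            return "", err
        }
        for _, l := range leases {
            if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
                return l.IPaddr, nil // e.g. 192.168.39.6 in the run above
            }
        }
        time.Sleep(500 * time.Millisecond) // the driver uses a growing, randomized backoff instead
    }
    return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}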
	I0815 00:21:01.938091   30723 main.go:141] libmachine: (ha-863044) Reserving static IP address...
	I0815 00:21:01.938429   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find host DHCP lease matching {name: "ha-863044", mac: "52:54:00:32:35:5d", ip: "192.168.39.6"} in network mk-ha-863044
	I0815 00:21:02.005802   30723 main.go:141] libmachine: (ha-863044) DBG | Getting to WaitForSSH function...
	I0815 00:21:02.005829   30723 main.go:141] libmachine: (ha-863044) Reserved static IP address: 192.168.39.6
	I0815 00:21:02.005843   30723 main.go:141] libmachine: (ha-863044) Waiting for SSH to be available...
	I0815 00:21:02.008469   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.008856   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.008879   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.009038   30723 main.go:141] libmachine: (ha-863044) DBG | Using SSH client type: external
	I0815 00:21:02.009061   30723 main.go:141] libmachine: (ha-863044) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa (-rw-------)
	I0815 00:21:02.009100   30723 main.go:141] libmachine: (ha-863044) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:02.009115   30723 main.go:141] libmachine: (ha-863044) DBG | About to run SSH command:
	I0815 00:21:02.009127   30723 main.go:141] libmachine: (ha-863044) DBG | exit 0
	I0815 00:21:02.136234   30723 main.go:141] libmachine: (ha-863044) DBG | SSH cmd err, output: <nil>: 
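SSH readiness is probed by running exit 0 against the guest, as the external ssh invocation above shows. A rough equivalent using golang.org/x/crypto/ssh (assumes the os, net and time imports; the user, key path and relaxed host-key checking mirror the log, but the function itself is illustrative):

// sshReady dials host:22 with the machine's private key and runs "exit 0",
// mirroring the WaitForSSH probe in the log.
func sshReady(host, keyPath string) error {
    keyBytes, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(keyBytes)
    if err != nil {
        return err
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        Timeout:         10 * time.Second,
    }
    client, err := ssh.Dial("tcp", net.JoinHostPort(host, "22"), cfg)
    if err != nil {
        return err
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        return err
    }
    defer sess.Close()
    return sess.Run("exit 0")
}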
	I0815 00:21:02.136566   30723 main.go:141] libmachine: (ha-863044) KVM machine creation complete!
	I0815 00:21:02.136837   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:21:02.137355   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:02.137542   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:02.137718   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:21:02.137737   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:02.138897   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:21:02.138909   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:21:02.138914   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:21:02.138920   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.140964   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.141278   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.141316   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.141373   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.141518   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.141679   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.141849   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.142002   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.142176   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.142185   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:21:02.251373   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:21:02.251394   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:21:02.251401   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.253724   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.254037   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.254065   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.254187   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.254356   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.254518   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.254662   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.254902   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.255082   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.255092   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:21:02.364705   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:21:02.364790   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:21:02.364801   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:21:02.364808   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.365023   30723 buildroot.go:166] provisioning hostname "ha-863044"
	I0815 00:21:02.365045   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.365234   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.367819   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.368129   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.368147   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.368314   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.368539   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.368686   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.368797   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.368929   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.369080   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.369091   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044 && echo "ha-863044" | sudo tee /etc/hostname
	I0815 00:21:02.494243   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:21:02.494268   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.497012   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.497355   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.497386   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.497557   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.497720   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.497856   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.497991   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.498199   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.498412   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.498431   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:21:02.616370   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:21:02.616398   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:21:02.616417   30723 buildroot.go:174] setting up certificates
	I0815 00:21:02.616425   30723 provision.go:84] configureAuth start
	I0815 00:21:02.616433   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.616703   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:02.619259   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.619551   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.619574   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.619707   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.621625   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.621917   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.621940   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.622036   30723 provision.go:143] copyHostCerts
	I0815 00:21:02.622065   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:02.622090   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:21:02.622105   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:02.622168   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:21:02.622264   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:02.622283   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:21:02.622289   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:02.622315   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:21:02.622376   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:02.622391   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:21:02.622395   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:02.622416   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:21:02.622472   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044 san=[127.0.0.1 192.168.39.6 ha-863044 localhost minikube]
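The server certificate is generated with the organization and SANs listed in the line above. A sketch of the corresponding crypto/x509 template (assumes the crypto/x509, crypto/x509/pkix, math/big, net and time imports; signing the template against the CA key, as provision.go does, is omitted for brevity):

// serverCertTemplate builds an x509 template with the SANs from the log:
// [127.0.0.1 192.168.39.6 ha-863044 localhost minikube], org jenkins.ha-863044.
func serverCertTemplate() *x509.Certificate {
    return &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-863044"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-863044", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.6")},
    }
}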
	I0815 00:21:02.682385   30723 provision.go:177] copyRemoteCerts
	I0815 00:21:02.682445   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:21:02.682469   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.684881   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.685194   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.685216   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.685396   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.685565   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.685694   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.685821   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:02.770543   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:21:02.770622   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:21:02.792790   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:21:02.792855   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 00:21:02.815892   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:21:02.815971   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:21:02.837522   30723 provision.go:87] duration metric: took 221.084548ms to configureAuth
	I0815 00:21:02.837555   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:21:02.837712   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:02.837781   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.840096   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.840433   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.840458   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.840559   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.840739   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.840893   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.841013   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.841119   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.841304   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.841325   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:21:03.101543   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:21:03.101578   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:21:03.101589   30723 main.go:141] libmachine: (ha-863044) Calling .GetURL
	I0815 00:21:03.103042   30723 main.go:141] libmachine: (ha-863044) DBG | Using libvirt version 6000000
	I0815 00:21:03.105226   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.105597   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.105638   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.105935   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:21:03.105948   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:21:03.105954   30723 client.go:171] duration metric: took 25.136652037s to LocalClient.Create
	I0815 00:21:03.105976   30723 start.go:167] duration metric: took 25.136714259s to libmachine.API.Create "ha-863044"
	I0815 00:21:03.105990   30723 start.go:293] postStartSetup for "ha-863044" (driver="kvm2")
	I0815 00:21:03.106001   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:21:03.106024   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.106229   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:21:03.106252   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.108382   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.108765   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.108797   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.108909   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.109070   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.109213   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.109423   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.194188   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:21:03.198338   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:21:03.198369   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:21:03.198449   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:21:03.198542   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:21:03.198554   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:21:03.198643   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:21:03.207701   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:03.228996   30723 start.go:296] duration metric: took 122.994267ms for postStartSetup
	I0815 00:21:03.229035   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:21:03.229627   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:03.232115   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.232410   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.232435   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.232756   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:03.232953   30723 start.go:128] duration metric: took 25.280860151s to createHost
	I0815 00:21:03.232975   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.235077   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.235386   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.235412   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.235519   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.235689   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.235842   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.235958   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.236077   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:03.236256   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:03.236279   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:21:03.344631   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681263.324116942
	
	I0815 00:21:03.344665   30723 fix.go:216] guest clock: 1723681263.324116942
	I0815 00:21:03.344674   30723 fix.go:229] Guest: 2024-08-15 00:21:03.324116942 +0000 UTC Remote: 2024-08-15 00:21:03.232965678 +0000 UTC m=+25.385115084 (delta=91.151264ms)
	I0815 00:21:03.344710   30723 fix.go:200] guest clock delta is within tolerance: 91.151264ms
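The guest clock check parses the guest's epoch timestamp (from date +%s.%N over SSH) and compares it with the host clock; the 91ms delta above falls within tolerance. A small sketch of that comparison (assumes the strconv, strings and time imports; the exact tolerance minikube applies is not shown in the log, so it is a parameter here):

// clockDeltaOK parses the guest's epoch timestamp and compares it to the host clock.
// e.g. clockDeltaOK("1723681263.324116942", time.Now(), time.Second)
func clockDeltaOK(guestEpoch string, hostNow time.Time, tolerance time.Duration) (time.Duration, bool) {
    secs, err := strconv.ParseFloat(strings.TrimSpace(guestEpoch), 64)
    if err != nil {
        return 0, false
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    delta := hostNow.Sub(guest)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tolerance
}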
	I0815 00:21:03.344720   30723 start.go:83] releasing machines lock for "ha-863044", held for 25.392691668s
	I0815 00:21:03.344743   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.345004   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:03.347482   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.347795   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.347821   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.347923   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348404   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348551   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348648   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:21:03.348715   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.348723   30723 ssh_runner.go:195] Run: cat /version.json
	I0815 00:21:03.348737   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.350881   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351228   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.351255   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351278   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351320   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.351512   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.351569   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.351594   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351655   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.351721   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.351797   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.351869   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.351967   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.352115   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.433522   30723 ssh_runner.go:195] Run: systemctl --version
	I0815 00:21:03.466093   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:21:03.619012   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:21:03.624678   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:21:03.624728   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:21:03.640029   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:21:03.640044   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:21:03.640090   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:21:03.655169   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:21:03.667440   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:21:03.667479   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:21:03.679720   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:21:03.692116   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:21:03.801801   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:21:03.955057   30723 docker.go:233] disabling docker service ...
	I0815 00:21:03.955114   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:21:03.968149   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:21:03.979905   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:21:04.095537   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:21:04.216331   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:21:04.230503   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:21:04.247875   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:21:04.247944   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.258217   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:21:04.258281   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.267758   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.276984   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.285989   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:21:04.295369   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.304416   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.319501   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.329626   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:21:04.338999   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:21:04.339049   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:21:04.351366   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:21:04.360028   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:04.472934   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:21:04.607975   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:21:04.608063   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:21:04.613012   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:21:04.613054   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:21:04.616396   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:21:04.656063   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:21:04.656156   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:04.686776   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:04.717881   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:21:04.718992   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:04.721533   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:04.721792   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:04.721824   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:04.721999   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:21:04.725839   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:04.739377   30723 kubeadm.go:883] updating cluster {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:21:04.739515   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:21:04.739573   30723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:21:04.773569   30723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 00:21:04.773642   30723 ssh_runner.go:195] Run: which lz4
	I0815 00:21:04.777366   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 00:21:04.777466   30723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 00:21:04.781342   30723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 00:21:04.781373   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 00:21:05.969606   30723 crio.go:462] duration metric: took 1.192161234s to copy over tarball
	I0815 00:21:05.969672   30723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 00:21:07.918007   30723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.948304703s)
	I0815 00:21:07.918039   30723 crio.go:469] duration metric: took 1.948406345s to extract the tarball
	I0815 00:21:07.918049   30723 ssh_runner.go:146] rm: /preloaded.tar.lz4
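The preload step above scps the lz4-compressed image tarball to the guest and shells out to tar with an lz4 filter to unpack it under /var. A minimal Go sketch of the same invocation via os/exec, assuming tar and lz4 are available on PATH (paths are the ones logged above, shown for illustration):

package main

import (
	"log"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed tarball into destDir, mirroring:
//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // filter the archive through the lz4 decompressor
		"-C", destDir, // change to the destination before extracting
		"-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("tar output: %s", out)
	}
	return err
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}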
	I0815 00:21:07.954630   30723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:21:07.995361   30723 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:21:07.995385   30723 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:21:07.995394   30723 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0815 00:21:07.995513   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:21:07.995608   30723 ssh_runner.go:195] Run: crio config
	I0815 00:21:08.039497   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:21:08.039518   30723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 00:21:08.039528   30723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:21:08.039555   30723 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863044 NodeName:ha-863044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:21:08.039677   30723 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863044"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
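The kubeadm config printed above is rendered from a handful of cluster parameters: node IP and name, pod and service CIDRs, and the Kubernetes version. A minimal, illustrative Go sketch of rendering such a config with text/template; the template body and struct fields here are assumptions for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values the rendered config depends on.
// Field names are illustrative, not minikube's internal types.
type clusterParams struct {
	NodeIP        string
	NodeName      string
	PodSubnet     string
	ServiceSubnet string
	K8sVersion    string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		NodeIP:        "192.168.39.6",
		NodeName:      "ha-863044",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
		K8sVersion:    "v1.31.0",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	// Print the rendered config; the log below shows minikube copying its
	// version to /var/tmp/minikube/kubeadm.yaml.new on the guest.
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}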
	
	I0815 00:21:08.039698   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:21:08.039740   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:21:08.054395   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:21:08.054570   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
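The kube-vip static pod above holds the control-plane VIP 192.168.39.254 via leader election and, with lb_enable set, load-balances port 8443 across control-plane nodes. A minimal sketch of probing that VIP for reachability with a plain TCP dial (the timeout value is an arbitrary choice for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The VIP and port come from the kube-vip manifest above
	// (address: 192.168.39.254, port: "8443").
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("VIP %s not reachable: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("VIP %s accepted a TCP connection\n", addr)
}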
	I0815 00:21:08.054629   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:08.064446   30723 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:21:08.064522   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 00:21:08.072777   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 00:21:08.086979   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:21:08.101588   30723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 00:21:08.115839   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 00:21:08.129970   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:21:08.133232   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:08.143442   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:08.249523   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:21:08.265025   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.6
	I0815 00:21:08.265041   30723 certs.go:194] generating shared ca certs ...
	I0815 00:21:08.265058   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.265234   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:21:08.265302   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:21:08.265317   30723 certs.go:256] generating profile certs ...
	I0815 00:21:08.265386   30723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:21:08.265402   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt with IP's: []
	I0815 00:21:08.485903   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt ...
	I0815 00:21:08.485937   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt: {Name:mk852256948a32d4c87a5e18722bfc8c23ec9719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.486136   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key ...
	I0815 00:21:08.486150   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key: {Name:mk1a22c6ac652160a7de25f3603d049244701baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.486254   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8
	I0815 00:21:08.486273   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.254]
	I0815 00:21:08.567621   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 ...
	I0815 00:21:08.567652   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8: {Name:mk14b63d91ccee3ec4cca025aabfdc68aaf70a88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.567825   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8 ...
	I0815 00:21:08.567840   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8: {Name:mkbbc89093724d7eaf1c152c604b902a33bb344d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.567934   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:21:08.568040   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:21:08.568125   30723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:21:08.568144   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt with IP's: []
	I0815 00:21:08.703605   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt ...
	I0815 00:21:08.703635   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt: {Name:mkac43649e9a87f80a604ef4572c3441e99afc63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.703802   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key ...
	I0815 00:21:08.703815   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key: {Name:mk20457fff8d55d19661ee46633906c40d27707f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
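The profile certs above ("minikube-user", "minikube", "aggregator") are all produced the same way: generate a key pair, then sign the certificate with the shared minikubeCA. A self-contained sketch of that pattern using the standard crypto/x509 package; the key size, subjects, and validity period are illustrative assumptions, not minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stands in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client key plus a certificate signed by the CA (like profiles/ha-863044/client.crt).
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)

	// PEM-encode the signed client certificate to stdout.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
}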
	I0815 00:21:08.703909   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:21:08.703931   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:21:08.703947   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:21:08.703966   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:21:08.703984   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:21:08.704002   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:21:08.704018   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:21:08.704035   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:21:08.704097   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:21:08.704142   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:21:08.704155   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:21:08.704188   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:21:08.704221   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:21:08.704254   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:21:08.704308   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:08.704354   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.704382   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.704399   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:08.704965   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:21:08.728196   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:21:08.749792   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:21:08.770633   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:21:08.791470   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 00:21:08.812005   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:21:08.833554   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:21:08.854902   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:21:08.875847   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:21:08.896234   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:21:08.917767   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:21:08.938830   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:21:08.953609   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:21:08.958681   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:21:08.967911   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.971605   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.971644   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.976665   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:21:08.985881   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:21:08.995036   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.998744   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.998805   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:21:09.003785   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:21:09.015972   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:21:09.031850   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.036378   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.036442   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.043443   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:21:09.057791   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:21:09.062303   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:21:09.062347   30723 kubeadm.go:392] StartCluster: {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:21:09.062415   30723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:21:09.062473   30723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:21:09.105164   30723 cri.go:89] found id: ""
	I0815 00:21:09.105237   30723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:21:09.114154   30723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:21:09.122721   30723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:21:09.131084   30723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:21:09.131101   30723 kubeadm.go:157] found existing configuration files:
	
	I0815 00:21:09.131144   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:21:09.139015   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:21:09.139074   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:21:09.147288   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:21:09.155460   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:21:09.155514   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:21:09.163812   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:21:09.172354   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:21:09.172393   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:21:09.180596   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:21:09.188410   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:21:09.188462   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:21:09.196726   30723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 00:21:09.298392   30723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:21:09.298493   30723 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:21:09.390465   30723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:21:09.390578   30723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:21:09.390720   30723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:21:09.400023   30723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:21:09.402780   30723 out.go:204]   - Generating certificates and keys ...
	I0815 00:21:09.402867   30723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:21:09.402924   30723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:21:09.726623   30723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:21:09.822504   30723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:21:09.906086   30723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:21:10.322395   30723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:21:10.435919   30723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:21:10.436076   30723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-863044 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0815 00:21:10.824872   30723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:21:10.825171   30723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-863044 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0815 00:21:10.943003   30723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:21:11.019310   30723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:21:11.180466   30723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:21:11.180742   30723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:21:11.526821   30723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:21:11.916049   30723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:21:12.107671   30723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:21:12.205597   30723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:21:12.311189   30723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:21:12.311883   30723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:21:12.315179   30723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:21:12.356392   30723 out.go:204]   - Booting up control plane ...
	I0815 00:21:12.356554   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:21:12.356740   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:21:12.356854   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:21:12.357050   30723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:21:12.357176   30723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:21:12.357257   30723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:21:12.486137   30723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:21:12.486285   30723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:21:12.987140   30723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.454775ms
	I0815 00:21:12.987229   30723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:21:18.942567   30723 kubeadm.go:310] [api-check] The API server is healthy after 5.958117383s
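The kubelet-check and api-check phases above are simple HTTP polls against health endpoints (the kubelet at http://127.0.0.1:10248/healthz, then the API server's /healthz). A minimal sketch of that kind of wait loop; the URL, poll interval, and timeout are illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}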
	I0815 00:21:18.954188   30723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:21:18.966016   30723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:21:19.498724   30723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:21:19.498879   30723 kubeadm.go:310] [mark-control-plane] Marking the node ha-863044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:21:19.509045   30723 kubeadm.go:310] [bootstrap-token] Using token: 3imy80.4d17q2wqt4vy2b7n
	I0815 00:21:19.510302   30723 out.go:204]   - Configuring RBAC rules ...
	I0815 00:21:19.510411   30723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:21:19.519698   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:21:19.530551   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:21:19.536265   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:21:19.540018   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:21:19.543648   30723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:21:19.560630   30723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:21:19.784650   30723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:21:20.349712   30723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:21:20.349732   30723 kubeadm.go:310] 
	I0815 00:21:20.349804   30723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:21:20.349812   30723 kubeadm.go:310] 
	I0815 00:21:20.349914   30723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:21:20.349934   30723 kubeadm.go:310] 
	I0815 00:21:20.349960   30723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:21:20.350022   30723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:21:20.350098   30723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:21:20.350108   30723 kubeadm.go:310] 
	I0815 00:21:20.350182   30723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:21:20.350192   30723 kubeadm.go:310] 
	I0815 00:21:20.350251   30723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:21:20.350261   30723 kubeadm.go:310] 
	I0815 00:21:20.350323   30723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:21:20.350431   30723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:21:20.350520   30723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:21:20.350529   30723 kubeadm.go:310] 
	I0815 00:21:20.350648   30723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:21:20.350757   30723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:21:20.350773   30723 kubeadm.go:310] 
	I0815 00:21:20.350876   30723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3imy80.4d17q2wqt4vy2b7n \
	I0815 00:21:20.351020   30723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 00:21:20.351050   30723 kubeadm.go:310] 	--control-plane 
	I0815 00:21:20.351058   30723 kubeadm.go:310] 
	I0815 00:21:20.351178   30723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:21:20.351188   30723 kubeadm.go:310] 
	I0815 00:21:20.351297   30723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3imy80.4d17q2wqt4vy2b7n \
	I0815 00:21:20.351436   30723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 00:21:20.352164   30723 kubeadm.go:310] W0815 00:21:09.278548     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:21:20.352563   30723 kubeadm.go:310] W0815 00:21:09.281384     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:21:20.352720   30723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:21:20.352734   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:21:20.352740   30723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 00:21:20.354542   30723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:21:20.355895   30723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:21:20.360879   30723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:21:20.360897   30723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:21:20.380859   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 00:21:20.736755   30723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:21:20.736885   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044 minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=true
	I0815 00:21:20.736895   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:20.762239   30723 ops.go:34] apiserver oom_adj: -16
	I0815 00:21:20.891351   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:21.391488   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:21.891422   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:22.392140   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:22.892374   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:23.391423   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:23.892112   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:24.391692   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:24.515446   30723 kubeadm.go:1113] duration metric: took 3.778599805s to wait for elevateKubeSystemPrivileges
	I0815 00:21:24.515482   30723 kubeadm.go:394] duration metric: took 15.453137418s to StartCluster
	I0815 00:21:24.515502   30723 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:24.515571   30723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:21:24.516397   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:24.516624   30723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:24.516674   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:21:24.516638   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:21:24.516672   30723 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 00:21:24.516753   30723 addons.go:69] Setting storage-provisioner=true in profile "ha-863044"
	I0815 00:21:24.516783   30723 addons.go:234] Setting addon storage-provisioner=true in "ha-863044"
	I0815 00:21:24.516782   30723 addons.go:69] Setting default-storageclass=true in profile "ha-863044"
	I0815 00:21:24.516812   30723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-863044"
	I0815 00:21:24.516839   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:24.517236   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:24.517312   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.517341   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.517417   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.517489   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.531778   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0815 00:21:24.532107   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0815 00:21:24.532181   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.532554   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.532726   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.532749   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.533066   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.533083   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.533101   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.533293   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.533377   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.533932   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.533961   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.535328   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:21:24.535676   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 00:21:24.536188   30723 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 00:21:24.536478   30723 addons.go:234] Setting addon default-storageclass=true in "ha-863044"
	I0815 00:21:24.536519   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:24.536896   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.536938   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.549472   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0815 00:21:24.549944   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.550465   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.550490   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.550732   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0815 00:21:24.550815   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.550974   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.551148   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.551573   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.551595   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.551893   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.552322   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.552362   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.552586   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:24.554346   30723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:21:24.555673   30723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:21:24.555691   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:21:24.555712   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:24.558336   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.558682   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:24.558698   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.558836   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:24.558999   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:24.559168   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:24.559279   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:24.567350   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0815 00:21:24.567673   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.568039   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.568052   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.568338   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.568453   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.570006   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:24.570189   30723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:21:24.570202   30723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:21:24.570218   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:24.572529   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.572873   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:24.572894   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.573005   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:24.573166   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:24.573302   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:24.573420   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:24.687201   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:21:24.734629   30723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:21:24.758862   30723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:21:25.147476   30723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 00:21:25.147499   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.147511   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.147794   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.147810   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.147817   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.147824   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.147828   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.148020   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.148028   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.148032   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.148084   30723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 00:21:25.148102   30723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 00:21:25.148183   30723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 00:21:25.148193   30723 round_trippers.go:469] Request Headers:
	I0815 00:21:25.148204   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:21:25.148211   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:21:25.155664   30723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 00:21:25.156491   30723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 00:21:25.156510   30723 round_trippers.go:469] Request Headers:
	I0815 00:21:25.156524   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:21:25.156537   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:21:25.156543   30723 round_trippers.go:473]     Content-Type: application/json
	I0815 00:21:25.158831   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:21:25.158952   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.158969   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.159178   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.159193   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.159204   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.352704   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.352732   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.353023   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.353044   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.353055   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.353064   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.353257   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.353270   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.354961   30723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0815 00:21:25.356150   30723 addons.go:510] duration metric: took 839.496754ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0815 00:21:25.356179   30723 start.go:246] waiting for cluster config update ...
	I0815 00:21:25.356194   30723 start.go:255] writing updated cluster config ...
	I0815 00:21:25.357847   30723 out.go:177] 
	I0815 00:21:25.359883   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:25.359959   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:25.361619   30723 out.go:177] * Starting "ha-863044-m02" control-plane node in "ha-863044" cluster
	I0815 00:21:25.362824   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:21:25.362844   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:21:25.362930   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:21:25.362944   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:21:25.363037   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:25.363202   30723 start.go:360] acquireMachinesLock for ha-863044-m02: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:21:25.363249   30723 start.go:364] duration metric: took 25.831µs to acquireMachinesLock for "ha-863044-m02"
	I0815 00:21:25.363275   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:25.363366   30723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 00:21:25.364976   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:21:25.365059   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:25.365089   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:25.380676   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0815 00:21:25.381123   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:25.381622   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:25.381646   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:25.381933   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:25.382107   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:25.382236   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:25.382380   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:21:25.382401   30723 client.go:168] LocalClient.Create starting
	I0815 00:21:25.382441   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:21:25.382469   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:21:25.382482   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:21:25.382528   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:21:25.382548   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:21:25.382564   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:21:25.382585   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:21:25.382596   30723 main.go:141] libmachine: (ha-863044-m02) Calling .PreCreateCheck
	I0815 00:21:25.382893   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:25.383289   30723 main.go:141] libmachine: Creating machine...
	I0815 00:21:25.383302   30723 main.go:141] libmachine: (ha-863044-m02) Calling .Create
	I0815 00:21:25.383460   30723 main.go:141] libmachine: (ha-863044-m02) Creating KVM machine...
	I0815 00:21:25.384763   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found existing default KVM network
	I0815 00:21:25.384935   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found existing private KVM network mk-ha-863044
	I0815 00:21:25.385100   30723 main.go:141] libmachine: (ha-863044-m02) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 ...
	I0815 00:21:25.385119   30723 main.go:141] libmachine: (ha-863044-m02) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:21:25.385218   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.385110   31086 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:21:25.385309   30723 main.go:141] libmachine: (ha-863044-m02) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:21:25.650654   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.650540   31086 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa...
	I0815 00:21:25.806017   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.805904   31086 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/ha-863044-m02.rawdisk...
	I0815 00:21:25.806049   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Writing magic tar header
	I0815 00:21:25.806070   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Writing SSH key tar header
	I0815 00:21:25.806084   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.806051   31086 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 ...
	I0815 00:21:25.806226   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02
	I0815 00:21:25.806252   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 (perms=drwx------)
	I0815 00:21:25.806264   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:21:25.806280   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:21:25.806294   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:21:25.806301   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:21:25.806310   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:21:25.806319   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:21:25.806329   30723 main.go:141] libmachine: (ha-863044-m02) Creating domain...
	I0815 00:21:25.806343   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:21:25.806357   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:21:25.806370   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:21:25.806381   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:21:25.806390   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home
	I0815 00:21:25.806396   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Skipping /home - not owner
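	The "Writing magic tar header" / "Writing SSH key tar header" lines above correspond to packing the freshly generated SSH key into the head of the raw disk image so the guest can pick it up on first boot. A rough Go sketch of that pattern follows; it is not minikube's actual common.go. The disk name and 20000MB size come from the log, while the use of id_rsa.pub and the authorized_keys entry name inside the tar are assumptions for illustration.

package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	const diskSizeMB = 20000 // DiskSize from the machine config dump above

	// Public half of the freshly generated key (assumed name for this sketch).
	key, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Create("ha-863044-m02.rawdisk")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Pack the key into a small tar stream at the head of the raw disk so the
	// guest can locate and extract it on first boot.
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}

	// Grow the file to its full size; the zero-filled remainder stays sparse on disk.
	if err := f.Truncate(diskSizeMB * 1024 * 1024); err != nil {
		log.Fatal(err)
	}
}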
	I0815 00:21:25.807283   30723 main.go:141] libmachine: (ha-863044-m02) define libvirt domain using xml: 
	I0815 00:21:25.807302   30723 main.go:141] libmachine: (ha-863044-m02) <domain type='kvm'>
	I0815 00:21:25.807311   30723 main.go:141] libmachine: (ha-863044-m02)   <name>ha-863044-m02</name>
	I0815 00:21:25.807327   30723 main.go:141] libmachine: (ha-863044-m02)   <memory unit='MiB'>2200</memory>
	I0815 00:21:25.807336   30723 main.go:141] libmachine: (ha-863044-m02)   <vcpu>2</vcpu>
	I0815 00:21:25.807344   30723 main.go:141] libmachine: (ha-863044-m02)   <features>
	I0815 00:21:25.807352   30723 main.go:141] libmachine: (ha-863044-m02)     <acpi/>
	I0815 00:21:25.807366   30723 main.go:141] libmachine: (ha-863044-m02)     <apic/>
	I0815 00:21:25.807378   30723 main.go:141] libmachine: (ha-863044-m02)     <pae/>
	I0815 00:21:25.807401   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807413   30723 main.go:141] libmachine: (ha-863044-m02)   </features>
	I0815 00:21:25.807423   30723 main.go:141] libmachine: (ha-863044-m02)   <cpu mode='host-passthrough'>
	I0815 00:21:25.807431   30723 main.go:141] libmachine: (ha-863044-m02)   
	I0815 00:21:25.807442   30723 main.go:141] libmachine: (ha-863044-m02)   </cpu>
	I0815 00:21:25.807450   30723 main.go:141] libmachine: (ha-863044-m02)   <os>
	I0815 00:21:25.807461   30723 main.go:141] libmachine: (ha-863044-m02)     <type>hvm</type>
	I0815 00:21:25.807471   30723 main.go:141] libmachine: (ha-863044-m02)     <boot dev='cdrom'/>
	I0815 00:21:25.807481   30723 main.go:141] libmachine: (ha-863044-m02)     <boot dev='hd'/>
	I0815 00:21:25.807491   30723 main.go:141] libmachine: (ha-863044-m02)     <bootmenu enable='no'/>
	I0815 00:21:25.807525   30723 main.go:141] libmachine: (ha-863044-m02)   </os>
	I0815 00:21:25.807551   30723 main.go:141] libmachine: (ha-863044-m02)   <devices>
	I0815 00:21:25.807569   30723 main.go:141] libmachine: (ha-863044-m02)     <disk type='file' device='cdrom'>
	I0815 00:21:25.807589   30723 main.go:141] libmachine: (ha-863044-m02)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/boot2docker.iso'/>
	I0815 00:21:25.807602   30723 main.go:141] libmachine: (ha-863044-m02)       <target dev='hdc' bus='scsi'/>
	I0815 00:21:25.807610   30723 main.go:141] libmachine: (ha-863044-m02)       <readonly/>
	I0815 00:21:25.807619   30723 main.go:141] libmachine: (ha-863044-m02)     </disk>
	I0815 00:21:25.807631   30723 main.go:141] libmachine: (ha-863044-m02)     <disk type='file' device='disk'>
	I0815 00:21:25.807646   30723 main.go:141] libmachine: (ha-863044-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:21:25.807661   30723 main.go:141] libmachine: (ha-863044-m02)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/ha-863044-m02.rawdisk'/>
	I0815 00:21:25.807674   30723 main.go:141] libmachine: (ha-863044-m02)       <target dev='hda' bus='virtio'/>
	I0815 00:21:25.807688   30723 main.go:141] libmachine: (ha-863044-m02)     </disk>
	I0815 00:21:25.807700   30723 main.go:141] libmachine: (ha-863044-m02)     <interface type='network'>
	I0815 00:21:25.807726   30723 main.go:141] libmachine: (ha-863044-m02)       <source network='mk-ha-863044'/>
	I0815 00:21:25.807747   30723 main.go:141] libmachine: (ha-863044-m02)       <model type='virtio'/>
	I0815 00:21:25.807762   30723 main.go:141] libmachine: (ha-863044-m02)     </interface>
	I0815 00:21:25.807781   30723 main.go:141] libmachine: (ha-863044-m02)     <interface type='network'>
	I0815 00:21:25.807795   30723 main.go:141] libmachine: (ha-863044-m02)       <source network='default'/>
	I0815 00:21:25.807806   30723 main.go:141] libmachine: (ha-863044-m02)       <model type='virtio'/>
	I0815 00:21:25.807815   30723 main.go:141] libmachine: (ha-863044-m02)     </interface>
	I0815 00:21:25.807830   30723 main.go:141] libmachine: (ha-863044-m02)     <serial type='pty'>
	I0815 00:21:25.807841   30723 main.go:141] libmachine: (ha-863044-m02)       <target port='0'/>
	I0815 00:21:25.807851   30723 main.go:141] libmachine: (ha-863044-m02)     </serial>
	I0815 00:21:25.807860   30723 main.go:141] libmachine: (ha-863044-m02)     <console type='pty'>
	I0815 00:21:25.807870   30723 main.go:141] libmachine: (ha-863044-m02)       <target type='serial' port='0'/>
	I0815 00:21:25.807879   30723 main.go:141] libmachine: (ha-863044-m02)     </console>
	I0815 00:21:25.807884   30723 main.go:141] libmachine: (ha-863044-m02)     <rng model='virtio'>
	I0815 00:21:25.807898   30723 main.go:141] libmachine: (ha-863044-m02)       <backend model='random'>/dev/random</backend>
	I0815 00:21:25.807912   30723 main.go:141] libmachine: (ha-863044-m02)     </rng>
	I0815 00:21:25.807927   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807941   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807954   30723 main.go:141] libmachine: (ha-863044-m02)   </devices>
	I0815 00:21:25.807965   30723 main.go:141] libmachine: (ha-863044-m02) </domain>
	I0815 00:21:25.807980   30723 main.go:141] libmachine: (ha-863044-m02) 
	I0815 00:21:25.814743   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:5a:e2:de in network default
	I0815 00:21:25.815224   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring networks are active...
	I0815 00:21:25.815240   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:25.815967   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring network default is active
	I0815 00:21:25.816265   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring network mk-ha-863044 is active
	I0815 00:21:25.816696   30723 main.go:141] libmachine: (ha-863044-m02) Getting domain xml...
	I0815 00:21:25.817316   30723 main.go:141] libmachine: (ha-863044-m02) Creating domain...
	I0815 00:21:27.102595   30723 main.go:141] libmachine: (ha-863044-m02) Waiting to get IP...
	I0815 00:21:27.103754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.104274   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.104329   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.104257   31086 retry.go:31] will retry after 249.806387ms: waiting for machine to come up
	I0815 00:21:27.356115   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.356670   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.356700   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.356604   31086 retry.go:31] will retry after 272.897696ms: waiting for machine to come up
	I0815 00:21:27.630829   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.631362   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.631388   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.631302   31086 retry.go:31] will retry after 423.643372ms: waiting for machine to come up
	I0815 00:21:28.056689   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:28.057185   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:28.057214   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:28.057141   31086 retry.go:31] will retry after 429.885873ms: waiting for machine to come up
	I0815 00:21:28.488749   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:28.489187   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:28.489213   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:28.489151   31086 retry.go:31] will retry after 564.842329ms: waiting for machine to come up
	I0815 00:21:29.055916   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:29.056538   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:29.056573   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:29.056419   31086 retry.go:31] will retry after 952.116011ms: waiting for machine to come up
	I0815 00:21:30.009650   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:30.010110   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:30.010136   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:30.010074   31086 retry.go:31] will retry after 1.163406803s: waiting for machine to come up
	I0815 00:21:31.175551   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:31.175942   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:31.175969   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:31.175901   31086 retry.go:31] will retry after 1.339715785s: waiting for machine to come up
	I0815 00:21:32.517344   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:32.517754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:32.517784   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:32.517702   31086 retry.go:31] will retry after 1.542004388s: waiting for machine to come up
	I0815 00:21:34.061553   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:34.061997   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:34.062033   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:34.061936   31086 retry.go:31] will retry after 1.693143598s: waiting for machine to come up
	I0815 00:21:35.756552   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:35.756971   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:35.756997   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:35.756920   31086 retry.go:31] will retry after 2.225684381s: waiting for machine to come up
	I0815 00:21:37.985128   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:37.985577   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:37.985616   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:37.985542   31086 retry.go:31] will retry after 3.575835042s: waiting for machine to come up
	I0815 00:21:41.563129   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:41.563608   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:41.563645   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:41.563567   31086 retry.go:31] will retry after 4.387259926s: waiting for machine to come up
	I0815 00:21:45.951832   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:45.952383   30723 main.go:141] libmachine: (ha-863044-m02) Found IP for machine: 192.168.39.170
	I0815 00:21:45.952413   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has current primary IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:45.952422   30723 main.go:141] libmachine: (ha-863044-m02) Reserving static IP address...
	I0815 00:21:45.953020   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find host DHCP lease matching {name: "ha-863044-m02", mac: "52:54:00:4e:19:c9", ip: "192.168.39.170"} in network mk-ha-863044
	I0815 00:21:46.024826   30723 main.go:141] libmachine: (ha-863044-m02) Reserved static IP address: 192.168.39.170
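	The retry.go lines above show the wait-for-IP loop backing off with growing, jittered delays (roughly 250ms up to several seconds) until the DHCP lease appears. A minimal Go sketch of that shape of loop, assuming a hypothetical lookupLease helper; this is not minikube's actual retry helper, only the initial 250ms interval is taken from the log.

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// lookupLease is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the new domain's MAC address.
func lookupLease() (string, bool) { return "", false }

// waitForIP polls for the lease with growing, jittered delays until timeout.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // first retry interval seen in the log
	for time.Now().Before(deadline) {
		if ip, ok := lookupLease(); ok {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // back off
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if ip, err := waitForIP(2 * time.Minute); err == nil {
		log.Printf("Found IP for machine: %s", ip)
	} else {
		log.Print(err)
	}
}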
	I0815 00:21:46.024848   30723 main.go:141] libmachine: (ha-863044-m02) Waiting for SSH to be available...
	I0815 00:21:46.024861   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Getting to WaitForSSH function...
	I0815 00:21:46.027685   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:46.027990   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044
	I0815 00:21:46.028015   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find defined IP address of network mk-ha-863044 interface with MAC address 52:54:00:4e:19:c9
	I0815 00:21:46.028152   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH client type: external
	I0815 00:21:46.028178   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa (-rw-------)
	I0815 00:21:46.028207   30723 main.go:141] libmachine: (ha-863044-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:46.028220   30723 main.go:141] libmachine: (ha-863044-m02) DBG | About to run SSH command:
	I0815 00:21:46.028239   30723 main.go:141] libmachine: (ha-863044-m02) DBG | exit 0
	I0815 00:21:46.031878   30723 main.go:141] libmachine: (ha-863044-m02) DBG | SSH cmd err, output: exit status 255: 
	I0815 00:21:46.031898   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 00:21:46.031904   30723 main.go:141] libmachine: (ha-863044-m02) DBG | command : exit 0
	I0815 00:21:46.031910   30723 main.go:141] libmachine: (ha-863044-m02) DBG | err     : exit status 255
	I0815 00:21:46.031934   30723 main.go:141] libmachine: (ha-863044-m02) DBG | output  : 
	I0815 00:21:49.033998   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Getting to WaitForSSH function...
	I0815 00:21:49.036538   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.036885   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.036912   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.036973   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH client type: external
	I0815 00:21:49.037071   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa (-rw-------)
	I0815 00:21:49.037108   30723 main.go:141] libmachine: (ha-863044-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:49.037122   30723 main.go:141] libmachine: (ha-863044-m02) DBG | About to run SSH command:
	I0815 00:21:49.037136   30723 main.go:141] libmachine: (ha-863044-m02) DBG | exit 0
	I0815 00:21:49.160317   30723 main.go:141] libmachine: (ha-863044-m02) DBG | SSH cmd err, output: <nil>: 
	I0815 00:21:49.160617   30723 main.go:141] libmachine: (ha-863044-m02) KVM machine creation complete!
	I0815 00:21:49.160936   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:49.161565   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:49.161757   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:49.161925   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:21:49.161957   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:21:49.163197   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:21:49.163209   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:21:49.163219   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:21:49.163225   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.165390   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.165748   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.165772   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.165893   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.166042   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.166183   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.166294   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.166448   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.166692   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.166706   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:21:49.263652   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
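	The native SSH client above simply runs "exit 0" against 192.168.39.170:22 until the command succeeds. A small equivalent using golang.org/x/crypto/ssh is sketched below; the host, port, user and key path are taken from the log lines, but this is not minikube's sshutil implementation.

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
	if err != nil {
		log.Fatal(err) // not reachable yet: the caller retries, as WaitForSSH does
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}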
	I0815 00:21:49.263679   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:21:49.263691   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.266383   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.266754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.266782   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.266936   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.267119   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.267264   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.267429   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.267590   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.267753   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.267764   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:21:49.368752   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:21:49.368818   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:21:49.368827   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:21:49.368837   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.369052   30723 buildroot.go:166] provisioning hostname "ha-863044-m02"
	I0815 00:21:49.369074   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.369236   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.371734   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.372061   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.372085   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.372221   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.372404   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.372539   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.372672   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.372814   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.372996   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.373009   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044-m02 && echo "ha-863044-m02" | sudo tee /etc/hostname
	I0815 00:21:49.485265   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044-m02
	
	I0815 00:21:49.485298   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.487683   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.488034   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.488062   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.488238   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.488422   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.488583   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.488740   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.488896   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.489094   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.489113   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:21:49.596979   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:21:49.597004   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:21:49.597037   30723 buildroot.go:174] setting up certificates
	I0815 00:21:49.597047   30723 provision.go:84] configureAuth start
	I0815 00:21:49.597061   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.597333   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:49.599655   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.599967   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.599992   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.600116   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.601985   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.602314   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.602340   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.602470   30723 provision.go:143] copyHostCerts
	I0815 00:21:49.602512   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:49.602544   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:21:49.602552   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:49.602618   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:21:49.602707   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:49.602725   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:21:49.602729   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:49.602753   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:21:49.602794   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:49.602811   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:21:49.602817   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:49.602839   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:21:49.602884   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044-m02 san=[127.0.0.1 192.168.39.170 ha-863044-m02 localhost minikube]
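	provision.go generates a server certificate signed by the minikube CA with the SANs listed above. A compressed Go sketch of issuing such a cert with crypto/x509 follows; the SANs, org and 26280h lifetime come from the log and config dump, while the PKCS#1 CA key assumption and the elided error handling are purely illustrative, not minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material from the paths in the log; errors elided for brevity in this sketch.
	caPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem")
	caKeyPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-863044-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as listed in the provision.go line above.
		DNSNames:    []string{"ha-863044-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.170")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem contents
}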
	I0815 00:21:49.779877   30723 provision.go:177] copyRemoteCerts
	I0815 00:21:49.779934   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:21:49.779970   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.782304   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.782598   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.782627   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.782861   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.783064   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.783190   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.783323   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:49.861771   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:21:49.861843   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:21:49.888019   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:21:49.888091   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:21:49.910750   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:21:49.910825   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:21:49.932521   30723 provision.go:87] duration metric: took 335.457393ms to configureAuth
	I0815 00:21:49.932555   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:21:49.932790   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:49.932903   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.935628   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.936015   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.936046   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.936200   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.936403   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.936583   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.936753   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.936914   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.937086   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.937106   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:21:50.205561   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:21:50.205586   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:21:50.205596   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetURL
	I0815 00:21:50.206889   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using libvirt version 6000000
	I0815 00:21:50.208898   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.209228   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.209259   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.209398   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:21:50.209411   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:21:50.209417   30723 client.go:171] duration metric: took 24.827007326s to LocalClient.Create
	I0815 00:21:50.209439   30723 start.go:167] duration metric: took 24.827058894s to libmachine.API.Create "ha-863044"
	I0815 00:21:50.209448   30723 start.go:293] postStartSetup for "ha-863044-m02" (driver="kvm2")
	I0815 00:21:50.209457   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:21:50.209477   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.209698   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:21:50.209717   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.211828   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.212089   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.212110   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.212311   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.212484   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.212674   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.212798   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.290097   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:21:50.293623   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:21:50.293643   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:21:50.293698   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:21:50.293765   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:21:50.293774   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:21:50.293852   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:21:50.302156   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:50.323245   30723 start.go:296] duration metric: took 113.784495ms for postStartSetup
	I0815 00:21:50.323298   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:50.323809   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:50.326686   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.327080   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.327114   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.327346   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:50.327522   30723 start.go:128] duration metric: took 24.964146227s to createHost
	I0815 00:21:50.327589   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.329748   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.330035   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.330062   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.330157   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.330327   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.330475   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.330594   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.330773   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:50.330964   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:50.330974   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:21:50.428976   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681310.408092904
	
	I0815 00:21:50.429000   30723 fix.go:216] guest clock: 1723681310.408092904
	I0815 00:21:50.429009   30723 fix.go:229] Guest: 2024-08-15 00:21:50.408092904 +0000 UTC Remote: 2024-08-15 00:21:50.327531716 +0000 UTC m=+72.479681123 (delta=80.561188ms)
	I0815 00:21:50.429027   30723 fix.go:200] guest clock delta is within tolerance: 80.561188ms
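	fix.go compares the clock read over SSH with the host-side timestamp: 1723681310.408092904 s (guest) minus 1723681310.327531716 s (remote) = 0.080561188 s, i.e. the 80.561188ms delta logged above, which falls inside the allowed tolerance, so the guest clock is left untouched.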
	I0815 00:21:50.429032   30723 start.go:83] releasing machines lock for "ha-863044-m02", held for 25.06576938s
	I0815 00:21:50.429051   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.429294   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:50.431823   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.432221   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.432266   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.433808   30723 out.go:177] * Found network options:
	I0815 00:21:50.435079   30723 out.go:177]   - NO_PROXY=192.168.39.6
	W0815 00:21:50.436335   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:21:50.436363   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.436877   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.437062   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.437163   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:21:50.437197   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	W0815 00:21:50.437222   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:21:50.437303   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:21:50.437326   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.439994   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440018   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440367   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.440404   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.440426   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440440   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440598   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.440702   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.440759   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.440824   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.440885   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.440932   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.440984   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.441025   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.661475   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:21:50.667943   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:21:50.667998   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:21:50.682256   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:21:50.682273   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:21:50.682338   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:21:50.699500   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:21:50.714377   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:21:50.714440   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:21:50.727274   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:21:50.739883   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:21:50.865517   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:21:51.003747   30723 docker.go:233] disabling docker service ...
	I0815 00:21:51.003820   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:21:51.017352   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:21:51.029133   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:21:51.154451   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:21:51.288112   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:21:51.301260   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:21:51.318378   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:21:51.318455   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.328767   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:21:51.328833   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.338383   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.347603   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.356884   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:21:51.366397   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.375473   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.390631   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
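The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and force the "cgroupfs" cgroup manager. A minimal Go sketch of the same two rewrites (illustrative only; minikube performs them over SSH with sed exactly as logged):

// criocfg.go: a minimal sketch of rewriting a cri-o drop-in config file.
// The path and the two replacement values are taken from the log above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := string(data)

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(out), 0644); err != nil {
		panic(err)
	}
}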
	I0815 00:21:51.400012   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:21:51.408511   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:21:51.408566   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:21:51.420541   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:21:51.429688   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:51.547869   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:21:51.678328   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:21:51.678409   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:21:51.683203   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:21:51.683252   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:21:51.686421   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:21:51.723286   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:21:51.723366   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:51.750523   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:51.779239   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:21:51.780623   30723 out.go:177]   - env NO_PROXY=192.168.39.6
	I0815 00:21:51.781870   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:51.784550   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:51.784942   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:51.784961   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:51.785205   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:21:51.789029   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
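The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any existing line for that name, then append the current one. A minimal Go sketch of the same idea (paths and names are taken from the log; error handling is simplified):

// hostsentry.go: a minimal sketch of an idempotent /etc/hosts update.
package main

import (
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same effect as: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}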
	I0815 00:21:51.800154   30723 mustload.go:65] Loading cluster: ha-863044
	I0815 00:21:51.800379   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:51.800761   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:51.800805   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:51.815216   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0815 00:21:51.815597   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:51.816063   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:51.816078   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:51.816341   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:51.816569   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:51.818064   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:51.818350   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:51.818387   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:51.832329   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0815 00:21:51.832783   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:51.833215   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:51.833235   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:51.833491   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:51.833636   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:51.833803   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.170
	I0815 00:21:51.833815   30723 certs.go:194] generating shared ca certs ...
	I0815 00:21:51.833831   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:51.833956   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:21:51.833992   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:21:51.834001   30723 certs.go:256] generating profile certs ...
	I0815 00:21:51.834064   30723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:21:51.834087   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b
	I0815 00:21:51.834100   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.254]
	I0815 00:21:52.092271   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b ...
	I0815 00:21:52.092297   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b: {Name:mk8be6d74c43afd827f181e50df7652f38161e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:52.092463   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b ...
	I0815 00:21:52.092476   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b: {Name:mk511d913c107fd588a9cf8a0c3a2ef42984fd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:52.092542   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:21:52.092700   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:21:52.092850   30723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:21:52.092865   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:21:52.092880   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:21:52.092893   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:21:52.092905   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:21:52.092918   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:21:52.092930   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:21:52.092943   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:21:52.092955   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
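certs.go above issues an apiserver certificate whose IP SANs cover the in-cluster service IPs, localhost, both control-plane node addresses and the 192.168.39.254 kube-vip address. A self-contained Go sketch of issuing such a certificate with crypto/x509 (a throwaway CA is generated in-memory for illustration; minikube signs with its cached minikubeCA key instead, and error handling is elided):

// apisan.go: a minimal sketch of issuing a server cert with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (assumption: the real flow loads an existing ca.crt/ca.key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs listed in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.6"), net.ParseIP("192.168.39.170"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}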
	I0815 00:21:52.093002   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:21:52.093029   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:21:52.093038   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:21:52.093059   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:21:52.093080   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:21:52.093100   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:21:52.093135   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:52.093160   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.093173   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.093185   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.093213   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:52.096735   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:52.097221   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:52.097241   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:52.097446   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:52.097649   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:52.097794   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:52.097962   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:52.181040   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 00:21:52.185719   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 00:21:52.196184   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 00:21:52.199804   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0815 00:21:52.209520   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 00:21:52.213244   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 00:21:52.224011   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 00:21:52.227492   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 00:21:52.237306   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 00:21:52.240797   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 00:21:52.250198   30723 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 00:21:52.253751   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 00:21:52.263515   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:21:52.287634   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:21:52.309806   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:21:52.331532   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:21:52.353311   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 00:21:52.375376   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 00:21:52.400179   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:21:52.421867   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:21:52.443162   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:21:52.464906   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:21:52.486390   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:21:52.507486   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 00:21:52.522468   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0815 00:21:52.537690   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 00:21:52.553421   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 00:21:52.568859   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 00:21:52.584224   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 00:21:52.599035   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 00:21:52.613930   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:21:52.619258   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:21:52.628625   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.632994   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.633044   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.638788   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:21:52.649038   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:21:52.659230   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.663224   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.663272   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.668363   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:21:52.677457   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:21:52.686687   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.690555   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.690605   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.695555   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:21:52.704856   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:21:52.708314   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:21:52.708361   30723 kubeadm.go:934] updating node {m02 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0815 00:21:52.708439   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:21:52.708469   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:21:52.708507   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:21:52.724921   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:21:52.724980   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
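The manifest above is what kube-vip.go generates and later copies to /etc/kubernetes/manifests/kube-vip.yaml as a static pod. A minimal sketch of how such a manifest could be rendered from a Go text/template (the struct and the trimmed template below are illustrative, not minikube's actual kube-vip template):

// kubevip.go: a minimal sketch of rendering a kube-vip static-pod manifest.
package main

import (
	"os"
	"text/template"
)

type vipConfig struct {
	VIP       string // the HA virtual IP (192.168.39.254 in this run)
	Interface string // NIC kube-vip announces the VIP on
	Port      int    // API server port
	Image     string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: vip_interface
      value: {{.Interface}}
    - name: port
      value: "{{.Port}}"
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	cfg := vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: 8443, Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}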
	I0815 00:21:52.725035   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:52.733943   30723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 00:21:52.733999   30723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:52.742668   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 00:21:52.742694   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:21:52.742736   30723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 00:21:52.742766   30723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 00:21:52.742767   30723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:21:52.746971   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 00:21:52.746991   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 00:21:54.975701   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:21:54.989491   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:21:54.989597   30723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:21:54.993221   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 00:21:54.993246   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0815 00:21:55.520848   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:21:55.520956   30723 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:21:55.525966   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 00:21:55.526000   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
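The kubectl/kubelet/kubeadm downloads above use "?checksum=file:...sha256" URLs, i.e. each binary is verified against its published SHA-256 before being scp'd into /var/lib/minikube/binaries. Minikube's download package handles those URLs internally; the standalone Go sketch below just shows the equivalent download-and-verify step by hand:

// fetchbin.go: a minimal sketch of fetching a Kubernetes binary and checking
// it against its published .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sumFile))[0] // the .sha256 file holds the hex digest
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubeadm", bin, 0755); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm verified and saved")
}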
	I0815 00:21:55.739980   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 00:21:55.748562   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 00:21:55.763555   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:21:55.778081   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:21:55.793097   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:21:55.796583   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:55.807629   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:55.938533   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:21:55.955576   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:55.956016   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:55.956068   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:55.970773   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0815 00:21:55.971258   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:55.971792   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:55.971813   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:55.972206   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:55.972382   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:55.972568   30723 start.go:317] joinCluster: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:21:55.972702   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 00:21:55.972727   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:55.975640   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:55.976046   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:55.976074   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:55.976206   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:55.976378   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:55.976527   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:55.976696   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:56.132045   30723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:56.132103   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f85zt8.dk03u657aanxbkpc --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m02 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443"
	I0815 00:22:17.902402   30723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f85zt8.dk03u657aanxbkpc --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m02 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443": (21.770273412s)
	I0815 00:22:17.902495   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 00:22:18.486275   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044-m02 minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=false
	I0815 00:22:18.625669   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863044-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 00:22:18.771489   30723 start.go:319] duration metric: took 22.798918544s to joinCluster
	I0815 00:22:18.771602   30723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:22:18.771919   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:22:18.773284   30723 out.go:177] * Verifying Kubernetes components...
	I0815 00:22:18.774595   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:22:18.998202   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:22:19.012004   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:22:19.012223   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 00:22:19.012272   30723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0815 00:22:19.012501   30723 node_ready.go:35] waiting up to 6m0s for node "ha-863044-m02" to be "Ready" ...
	I0815 00:22:19.012587   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:19.012596   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:19.012603   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:19.012607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:19.038987   30723 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0815 00:22:19.512830   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:19.512846   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:19.512857   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:19.512863   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:19.516445   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:20.013359   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:20.013381   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:20.013392   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:20.013401   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:20.017754   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:20.513504   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:20.513532   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:20.513543   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:20.513550   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:20.516750   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:21.013595   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:21.013619   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:21.013628   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:21.013631   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:21.016614   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:21.017204   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
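The repeated round_trippers GETs above and below are node_ready.go polling the API server roughly every 500ms, for up to 6 minutes, until ha-863044-m02 reports a Ready condition. A minimal Go sketch of that polling pattern (minikube itself goes through client-go; here the http.Client is assumed to already carry the client certificate and CA from the kubeconfig shown earlier):

// nodeready.go: a minimal sketch of waiting for a node's Ready condition.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeIsReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{} // assumption: TLS client cert and CA already configured
	url := "https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02"

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		if ready, err := nodeIsReady(client, url); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready")
			return
		case <-ticker.C:
		}
	}
}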
	I0815 00:22:21.513565   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:21.513594   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:21.513603   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:21.513607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:21.516521   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:22.013091   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:22.013111   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:22.013120   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:22.013123   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:22.016446   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:22.513547   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:22.513574   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:22.513585   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:22.513592   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:22.516694   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:23.013216   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:23.013243   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:23.013254   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:23.013259   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:23.023121   30723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 00:22:23.023774   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:23.512826   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:23.512849   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:23.512859   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:23.512864   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:23.515760   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:24.012704   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:24.012724   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:24.012732   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:24.012735   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:24.016299   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:24.513521   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:24.513544   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:24.513563   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:24.513569   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:24.517034   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:25.012863   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:25.012885   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:25.012896   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:25.012901   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:25.140822   30723 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I0815 00:22:25.141378   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:25.513650   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:25.513676   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:25.513686   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:25.513692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:25.516868   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:26.012996   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:26.013015   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:26.013026   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:26.013036   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:26.025110   30723 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0815 00:22:26.512830   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:26.512851   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:26.512865   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:26.512869   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:26.516139   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:27.013040   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:27.013062   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:27.013074   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:27.013079   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:27.016495   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:27.513481   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:27.513504   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:27.513513   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:27.513520   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:27.516356   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:27.517133   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:28.013289   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:28.013318   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:28.013326   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:28.013330   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:28.016534   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:28.513573   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:28.513594   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:28.513602   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:28.513607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:28.516770   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:29.012800   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:29.012822   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:29.012830   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:29.012833   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:29.016035   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:29.512918   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:29.512940   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:29.512947   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:29.512952   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:29.516290   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:30.013327   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:30.013351   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:30.013358   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:30.013362   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:30.016360   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:30.016850   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:30.513706   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:30.513726   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:30.513734   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:30.513739   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:30.516585   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:31.013105   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:31.013125   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:31.013133   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:31.013137   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:31.016090   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:31.512809   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:31.512841   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:31.512849   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:31.512852   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:31.515972   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:32.012770   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:32.012790   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:32.012798   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:32.012802   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:32.015906   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:32.512695   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:32.512716   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:32.512725   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:32.512728   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:32.515632   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:32.516406   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:33.013512   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:33.013533   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:33.013546   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:33.013550   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:33.016320   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:33.513289   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:33.513309   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:33.513316   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:33.513320   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:33.516207   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:34.013139   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:34.013161   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:34.013169   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:34.013172   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:34.016179   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:34.512839   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:34.512865   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:34.512876   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:34.512882   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:34.515453   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:35.012712   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:35.012736   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:35.012748   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:35.012754   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:35.022959   30723 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 00:22:35.023356   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:35.513191   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:35.513214   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:35.513225   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:35.513230   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:35.516137   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:36.013509   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:36.013530   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:36.013538   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:36.013541   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:36.016798   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:36.512836   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:36.512862   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:36.512872   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:36.512878   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:36.516281   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.013011   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.013031   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.013039   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.013042   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.016590   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.017079   30723 node_ready.go:49] node "ha-863044-m02" has status "Ready":"True"
	I0815 00:22:37.017096   30723 node_ready.go:38] duration metric: took 18.004580218s for node "ha-863044-m02" to be "Ready" ...
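The polling above is minikube's node_ready wait: it GETs the node object roughly every 500ms and checks its Ready condition until the status flips to True. Below is a minimal client-go sketch of that same check; it is not minikube's own code, and the kubeconfig path and node name are assumptions taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the sketch, not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node object about every 500ms, as the log above does.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-863044-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}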
	I0815 00:22:37.017113   30723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:22:37.017173   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:37.017181   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.017190   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.017194   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.021592   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:37.027616   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.027697   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bc2jh
	I0815 00:22:37.027707   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.027713   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.027722   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.030221   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.030983   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.030994   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.031001   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.031004   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.033177   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.033629   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.033649   30723 pod_ready.go:81] duration metric: took 6.01329ms for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.033657   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.033699   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxpqd
	I0815 00:22:37.033706   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.033712   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.033715   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.036052   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.036832   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.036845   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.036852   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.036855   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.038842   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.039438   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.039453   30723 pod_ready.go:81] duration metric: took 5.791539ms for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.039461   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.039501   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044
	I0815 00:22:37.039509   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.039515   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.039519   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.041705   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.042407   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.042419   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.042426   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.042430   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.044326   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.044772   30723 pod_ready.go:92] pod "etcd-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.044785   30723 pod_ready.go:81] duration metric: took 5.319056ms for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.044793   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.044829   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m02
	I0815 00:22:37.044836   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.044843   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.044847   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.046831   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.047403   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.047415   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.047421   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.047424   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.049788   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.050423   30723 pod_ready.go:92] pod "etcd-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.050441   30723 pod_ready.go:81] duration metric: took 5.642321ms for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.050458   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.213835   30723 request.go:632] Waited for 163.317682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:22:37.213904   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:22:37.213909   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.213917   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.213923   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.216844   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.413793   30723 request.go:632] Waited for 196.360496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.413861   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.413869   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.413880   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.413886   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.416825   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.417435   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.417453   30723 pod_ready.go:81] duration metric: took 366.985345ms for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
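The "Waited for ... due to client-side throttling" lines come from client-go's built-in rate limiter (default QPS 5, burst 10), not from server-side API Priority and Fairness; the waits in this log are just that limiter pacing the burst of pod and node GETs. The sketch below, with illustrative values only, shows where those limits live on a rest.Config.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; QPS/Burst values are illustrative,
	// raised above the client-go defaults (QPS=5, Burst=10) that produce
	// the throttling messages seen above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}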
	I0815 00:22:37.417463   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.613560   30723 request.go:632] Waited for 196.017014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:22:37.613619   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:22:37.613627   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.613635   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.613644   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.616818   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.813823   30723 request.go:632] Waited for 196.341076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.813879   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.813885   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.813892   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.813895   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.816850   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.817577   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.817593   30723 pod_ready.go:81] duration metric: took 400.124302ms for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.817602   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.013401   30723 request.go:632] Waited for 195.726582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:22:38.013473   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:22:38.013478   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.013485   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.013489   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.016577   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.213581   30723 request.go:632] Waited for 196.359714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:38.213654   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:38.213659   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.213668   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.213672   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.216766   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.217137   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:38.217155   30723 pod_ready.go:81] duration metric: took 399.546691ms for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.217163   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.413330   30723 request.go:632] Waited for 196.094896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:22:38.413389   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:22:38.413395   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.413402   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.413407   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.416538   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.613841   30723 request.go:632] Waited for 196.434899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:38.613918   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:38.613927   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.613935   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.613941   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.617214   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.617747   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:38.617773   30723 pod_ready.go:81] duration metric: took 400.603334ms for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.617789   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.813842   30723 request.go:632] Waited for 195.963426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:22:38.813893   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:22:38.813899   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.813906   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.813911   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.816702   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.013619   30723 request.go:632] Waited for 196.34729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:39.013706   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:39.013714   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.013722   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.013726   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.016543   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.017139   30723 pod_ready.go:92] pod "kube-proxy-6l4gp" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.017157   30723 pod_ready.go:81] duration metric: took 399.360176ms for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.017169   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.213268   30723 request.go:632] Waited for 196.035432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:22:39.213347   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:22:39.213354   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.213361   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.213364   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.216285   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.413361   30723 request.go:632] Waited for 196.348438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.413427   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.413434   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.413444   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.413453   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.416456   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.417033   30723 pod_ready.go:92] pod "kube-proxy-758vr" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.417051   30723 pod_ready.go:81] duration metric: took 399.876068ms for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.417060   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.613052   30723 request.go:632] Waited for 195.936806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:22:39.613116   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:22:39.613123   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.613133   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.613139   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.616328   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:39.813503   30723 request.go:632] Waited for 196.344352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.813571   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.813576   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.813584   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.813591   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.816987   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:39.817641   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.817664   30723 pod_ready.go:81] duration metric: took 400.594569ms for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.817676   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:40.013706   30723 request.go:632] Waited for 195.955688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:22:40.013765   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:22:40.013770   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.013778   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.013781   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.016871   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.213637   30723 request.go:632] Waited for 196.191598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:40.213709   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:40.213719   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.213728   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.213734   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.217048   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.217846   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:40.217866   30723 pod_ready.go:81] duration metric: took 400.177976ms for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:40.217880   30723 pod_ready.go:38] duration metric: took 3.200753657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:22:40.217898   30723 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:22:40.217952   30723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:22:40.233279   30723 api_server.go:72] duration metric: took 21.461634198s to wait for apiserver process to appear ...
	I0815 00:22:40.233296   30723 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:22:40.233312   30723 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0815 00:22:40.240396   30723 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
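After confirming the kube-apiserver process with pgrep, the wait probes the /healthz endpoint until it returns 200 with the body "ok". A simplified Go probe of the same endpoint is sketched below; skipping TLS verification is a shortcut for the sketch only (a real client would trust the cluster CA), and the address is the one from this run.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Simplified healthz probe; InsecureSkipVerify is a sketch-only shortcut.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.6:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expected: 200 ok
}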
	I0815 00:22:40.240466   30723 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0815 00:22:40.240476   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.240487   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.240496   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.241592   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:40.241712   30723 api_server.go:141] control plane version: v1.31.0
	I0815 00:22:40.241727   30723 api_server.go:131] duration metric: took 8.426075ms to wait for apiserver health ...
	I0815 00:22:40.241735   30723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:22:40.413319   30723 request.go:632] Waited for 171.496588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.413371   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.413376   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.413383   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.413388   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.418439   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:22:40.422523   30723 system_pods.go:59] 17 kube-system pods found
	I0815 00:22:40.422546   30723 system_pods.go:61] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:22:40.422551   30723 system_pods.go:61] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:22:40.422554   30723 system_pods.go:61] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:22:40.422558   30723 system_pods.go:61] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:22:40.422561   30723 system_pods.go:61] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:22:40.422564   30723 system_pods.go:61] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:22:40.422567   30723 system_pods.go:61] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:22:40.422570   30723 system_pods.go:61] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:22:40.422573   30723 system_pods.go:61] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:22:40.422576   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:22:40.422579   30723 system_pods.go:61] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:22:40.422582   30723 system_pods.go:61] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:22:40.422585   30723 system_pods.go:61] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:22:40.422587   30723 system_pods.go:61] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:22:40.422590   30723 system_pods.go:61] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:22:40.422593   30723 system_pods.go:61] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:22:40.422596   30723 system_pods.go:61] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:22:40.422601   30723 system_pods.go:74] duration metric: took 180.861182ms to wait for pod list to return data ...
	I0815 00:22:40.422611   30723 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:22:40.613804   30723 request.go:632] Waited for 191.125258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:22:40.613855   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:22:40.613863   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.613870   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.613876   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.617566   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.617782   30723 default_sa.go:45] found service account: "default"
	I0815 00:22:40.617795   30723 default_sa.go:55] duration metric: took 195.179763ms for default service account to be created ...
	I0815 00:22:40.617803   30723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:22:40.813165   30723 request.go:632] Waited for 195.287376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.813212   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.813218   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.813225   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.813229   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.817620   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:40.821578   30723 system_pods.go:86] 17 kube-system pods found
	I0815 00:22:40.821600   30723 system_pods.go:89] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:22:40.821606   30723 system_pods.go:89] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:22:40.821610   30723 system_pods.go:89] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:22:40.821614   30723 system_pods.go:89] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:22:40.821620   30723 system_pods.go:89] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:22:40.821624   30723 system_pods.go:89] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:22:40.821628   30723 system_pods.go:89] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:22:40.821632   30723 system_pods.go:89] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:22:40.821641   30723 system_pods.go:89] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:22:40.821645   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:22:40.821651   30723 system_pods.go:89] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:22:40.821655   30723 system_pods.go:89] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:22:40.821659   30723 system_pods.go:89] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:22:40.821663   30723 system_pods.go:89] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:22:40.821669   30723 system_pods.go:89] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:22:40.821673   30723 system_pods.go:89] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:22:40.821677   30723 system_pods.go:89] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:22:40.821683   30723 system_pods.go:126] duration metric: took 203.876122ms to wait for k8s-apps to be running ...
	I0815 00:22:40.821692   30723 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:22:40.821734   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:22:40.838015   30723 system_svc.go:56] duration metric: took 16.314738ms WaitForService to wait for kubelet
	I0815 00:22:40.838036   30723 kubeadm.go:582] duration metric: took 22.066393295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:22:40.838053   30723 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:22:41.013823   30723 request.go:632] Waited for 175.704777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0815 00:22:41.013872   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0815 00:22:41.013877   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:41.013884   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:41.013888   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:41.017502   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:41.018221   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:22:41.018245   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:22:41.018255   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:22:41.018260   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:22:41.018264   30723 node_conditions.go:105] duration metric: took 180.206048ms to run NodePressure ...
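The NodePressure step reads each node's reported capacity (ephemeral storage and CPU) from the node status, which is where the 17734596Ki and cpu=2 figures above come from. A hedged client-go sketch that prints the same two fields, with the kubeconfig path again assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}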
	I0815 00:22:41.018274   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:22:41.018297   30723 start.go:255] writing updated cluster config ...
	I0815 00:22:41.020376   30723 out.go:177] 
	I0815 00:22:41.021665   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:22:41.021741   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:22:41.023206   30723 out.go:177] * Starting "ha-863044-m03" control-plane node in "ha-863044" cluster
	I0815 00:22:41.024169   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:22:41.024188   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:22:41.024275   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:22:41.024285   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:22:41.024365   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:22:41.024511   30723 start.go:360] acquireMachinesLock for ha-863044-m03: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:22:41.024548   30723 start.go:364] duration metric: took 19.263µs to acquireMachinesLock for "ha-863044-m03"
	I0815 00:22:41.024562   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:22:41.024645   30723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 00:22:41.025969   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:22:41.026063   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:22:41.026100   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:22:41.040958   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0815 00:22:41.041364   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:22:41.041802   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:22:41.041820   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:22:41.042132   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:22:41.042294   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:22:41.042405   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:22:41.042529   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:22:41.042564   30723 client.go:168] LocalClient.Create starting
	I0815 00:22:41.042606   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:22:41.042651   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:22:41.042672   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:22:41.042743   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:22:41.042776   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:22:41.042797   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:22:41.042822   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:22:41.042835   30723 main.go:141] libmachine: (ha-863044-m03) Calling .PreCreateCheck
	I0815 00:22:41.042984   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:22:41.043375   30723 main.go:141] libmachine: Creating machine...
	I0815 00:22:41.043389   30723 main.go:141] libmachine: (ha-863044-m03) Calling .Create
	I0815 00:22:41.043504   30723 main.go:141] libmachine: (ha-863044-m03) Creating KVM machine...
	I0815 00:22:41.044534   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found existing default KVM network
	I0815 00:22:41.044709   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found existing private KVM network mk-ha-863044
	I0815 00:22:41.044838   30723 main.go:141] libmachine: (ha-863044-m03) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 ...
	I0815 00:22:41.044858   30723 main.go:141] libmachine: (ha-863044-m03) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:22:41.044917   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.044841   31483 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:22:41.045021   30723 main.go:141] libmachine: (ha-863044-m03) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:22:41.269348   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.269218   31483 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa...
	I0815 00:22:41.379165   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.379064   31483 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/ha-863044-m03.rawdisk...
	I0815 00:22:41.379193   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Writing magic tar header
	I0815 00:22:41.379207   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Writing SSH key tar header
	I0815 00:22:41.379218   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.379188   31483 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 ...
	I0815 00:22:41.379321   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03
	I0815 00:22:41.379346   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 (perms=drwx------)
	I0815 00:22:41.379361   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:22:41.379386   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:22:41.379400   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:22:41.379417   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:22:41.379434   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:22:41.379450   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:22:41.379466   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:22:41.379481   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:22:41.379495   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:22:41.379508   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:22:41.379520   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home
	I0815 00:22:41.379532   30723 main.go:141] libmachine: (ha-863044-m03) Creating domain...
	I0815 00:22:41.379558   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Skipping /home - not owner
	I0815 00:22:41.380342   30723 main.go:141] libmachine: (ha-863044-m03) define libvirt domain using xml: 
	I0815 00:22:41.380365   30723 main.go:141] libmachine: (ha-863044-m03) <domain type='kvm'>
	I0815 00:22:41.380375   30723 main.go:141] libmachine: (ha-863044-m03)   <name>ha-863044-m03</name>
	I0815 00:22:41.380384   30723 main.go:141] libmachine: (ha-863044-m03)   <memory unit='MiB'>2200</memory>
	I0815 00:22:41.380393   30723 main.go:141] libmachine: (ha-863044-m03)   <vcpu>2</vcpu>
	I0815 00:22:41.380399   30723 main.go:141] libmachine: (ha-863044-m03)   <features>
	I0815 00:22:41.380408   30723 main.go:141] libmachine: (ha-863044-m03)     <acpi/>
	I0815 00:22:41.380413   30723 main.go:141] libmachine: (ha-863044-m03)     <apic/>
	I0815 00:22:41.380418   30723 main.go:141] libmachine: (ha-863044-m03)     <pae/>
	I0815 00:22:41.380426   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380436   30723 main.go:141] libmachine: (ha-863044-m03)   </features>
	I0815 00:22:41.380451   30723 main.go:141] libmachine: (ha-863044-m03)   <cpu mode='host-passthrough'>
	I0815 00:22:41.380463   30723 main.go:141] libmachine: (ha-863044-m03)   
	I0815 00:22:41.380474   30723 main.go:141] libmachine: (ha-863044-m03)   </cpu>
	I0815 00:22:41.380486   30723 main.go:141] libmachine: (ha-863044-m03)   <os>
	I0815 00:22:41.380496   30723 main.go:141] libmachine: (ha-863044-m03)     <type>hvm</type>
	I0815 00:22:41.380505   30723 main.go:141] libmachine: (ha-863044-m03)     <boot dev='cdrom'/>
	I0815 00:22:41.380515   30723 main.go:141] libmachine: (ha-863044-m03)     <boot dev='hd'/>
	I0815 00:22:41.380537   30723 main.go:141] libmachine: (ha-863044-m03)     <bootmenu enable='no'/>
	I0815 00:22:41.380548   30723 main.go:141] libmachine: (ha-863044-m03)   </os>
	I0815 00:22:41.380553   30723 main.go:141] libmachine: (ha-863044-m03)   <devices>
	I0815 00:22:41.380561   30723 main.go:141] libmachine: (ha-863044-m03)     <disk type='file' device='cdrom'>
	I0815 00:22:41.380570   30723 main.go:141] libmachine: (ha-863044-m03)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/boot2docker.iso'/>
	I0815 00:22:41.380577   30723 main.go:141] libmachine: (ha-863044-m03)       <target dev='hdc' bus='scsi'/>
	I0815 00:22:41.380583   30723 main.go:141] libmachine: (ha-863044-m03)       <readonly/>
	I0815 00:22:41.380590   30723 main.go:141] libmachine: (ha-863044-m03)     </disk>
	I0815 00:22:41.380596   30723 main.go:141] libmachine: (ha-863044-m03)     <disk type='file' device='disk'>
	I0815 00:22:41.380604   30723 main.go:141] libmachine: (ha-863044-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:22:41.380615   30723 main.go:141] libmachine: (ha-863044-m03)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/ha-863044-m03.rawdisk'/>
	I0815 00:22:41.380625   30723 main.go:141] libmachine: (ha-863044-m03)       <target dev='hda' bus='virtio'/>
	I0815 00:22:41.380647   30723 main.go:141] libmachine: (ha-863044-m03)     </disk>
	I0815 00:22:41.380686   30723 main.go:141] libmachine: (ha-863044-m03)     <interface type='network'>
	I0815 00:22:41.380698   30723 main.go:141] libmachine: (ha-863044-m03)       <source network='mk-ha-863044'/>
	I0815 00:22:41.380705   30723 main.go:141] libmachine: (ha-863044-m03)       <model type='virtio'/>
	I0815 00:22:41.380714   30723 main.go:141] libmachine: (ha-863044-m03)     </interface>
	I0815 00:22:41.380720   30723 main.go:141] libmachine: (ha-863044-m03)     <interface type='network'>
	I0815 00:22:41.380728   30723 main.go:141] libmachine: (ha-863044-m03)       <source network='default'/>
	I0815 00:22:41.380732   30723 main.go:141] libmachine: (ha-863044-m03)       <model type='virtio'/>
	I0815 00:22:41.380740   30723 main.go:141] libmachine: (ha-863044-m03)     </interface>
	I0815 00:22:41.380745   30723 main.go:141] libmachine: (ha-863044-m03)     <serial type='pty'>
	I0815 00:22:41.380751   30723 main.go:141] libmachine: (ha-863044-m03)       <target port='0'/>
	I0815 00:22:41.380760   30723 main.go:141] libmachine: (ha-863044-m03)     </serial>
	I0815 00:22:41.380770   30723 main.go:141] libmachine: (ha-863044-m03)     <console type='pty'>
	I0815 00:22:41.380783   30723 main.go:141] libmachine: (ha-863044-m03)       <target type='serial' port='0'/>
	I0815 00:22:41.380791   30723 main.go:141] libmachine: (ha-863044-m03)     </console>
	I0815 00:22:41.380803   30723 main.go:141] libmachine: (ha-863044-m03)     <rng model='virtio'>
	I0815 00:22:41.380814   30723 main.go:141] libmachine: (ha-863044-m03)       <backend model='random'>/dev/random</backend>
	I0815 00:22:41.380825   30723 main.go:141] libmachine: (ha-863044-m03)     </rng>
	I0815 00:22:41.380832   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380836   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380849   30723 main.go:141] libmachine: (ha-863044-m03)   </devices>
	I0815 00:22:41.380860   30723 main.go:141] libmachine: (ha-863044-m03) </domain>
	I0815 00:22:41.380871   30723 main.go:141] libmachine: (ha-863044-m03) 
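With the domain XML generated, the kvm2 driver defines the domain in libvirt and boots it. The sketch below shows those two steps with the libvirt Go bindings; the file name is a placeholder for XML like the one printed above, and this is not the driver's actual code.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same connection URI as the KVMQemuURI shown in the config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Placeholder file holding domain XML such as the block logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Boot the freshly defined domain.
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain started")
}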
	I0815 00:22:41.387469   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:a4:a0:77 in network default
	I0815 00:22:41.388017   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring networks are active...
	I0815 00:22:41.388036   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:41.388766   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring network default is active
	I0815 00:22:41.389100   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring network mk-ha-863044 is active
	I0815 00:22:41.389419   30723 main.go:141] libmachine: (ha-863044-m03) Getting domain xml...
	I0815 00:22:41.390092   30723 main.go:141] libmachine: (ha-863044-m03) Creating domain...
	I0815 00:22:42.603059   30723 main.go:141] libmachine: (ha-863044-m03) Waiting to get IP...
	I0815 00:22:42.603812   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:42.604183   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:42.604214   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:42.604174   31483 retry.go:31] will retry after 234.358514ms: waiting for machine to come up
	I0815 00:22:42.840754   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:42.841084   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:42.841106   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:42.841048   31483 retry.go:31] will retry after 349.958791ms: waiting for machine to come up
	I0815 00:22:43.192467   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:43.192863   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:43.192890   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:43.192820   31483 retry.go:31] will retry after 358.098773ms: waiting for machine to come up
	I0815 00:22:43.552357   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:43.552797   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:43.552820   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:43.552770   31483 retry.go:31] will retry after 600.033913ms: waiting for machine to come up
	I0815 00:22:44.153805   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:44.154202   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:44.154228   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:44.154156   31483 retry.go:31] will retry after 616.990211ms: waiting for machine to come up
	I0815 00:22:44.773276   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:44.773815   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:44.773844   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:44.773763   31483 retry.go:31] will retry after 631.014269ms: waiting for machine to come up
	I0815 00:22:45.406591   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:45.407103   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:45.407129   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:45.407057   31483 retry.go:31] will retry after 1.084067737s: waiting for machine to come up
	I0815 00:22:46.493045   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:46.493493   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:46.493520   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:46.493458   31483 retry.go:31] will retry after 1.084636321s: waiting for machine to come up
	I0815 00:22:47.579722   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:47.580142   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:47.580174   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:47.580088   31483 retry.go:31] will retry after 1.283830855s: waiting for machine to come up
	I0815 00:22:48.867178   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:48.867702   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:48.867733   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:48.867654   31483 retry.go:31] will retry after 1.554254773s: waiting for machine to come up
	I0815 00:22:50.423320   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:50.423781   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:50.423808   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:50.423725   31483 retry.go:31] will retry after 1.892180005s: waiting for machine to come up
	I0815 00:22:52.317816   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:52.318256   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:52.318280   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:52.318200   31483 retry.go:31] will retry after 2.515000093s: waiting for machine to come up
	I0815 00:22:54.835775   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:54.836120   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:54.836144   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:54.836089   31483 retry.go:31] will retry after 3.437903548s: waiting for machine to come up
	I0815 00:22:58.277292   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:58.277724   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:58.277782   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:58.277716   31483 retry.go:31] will retry after 4.166628489s: waiting for machine to come up
	I0815 00:23:02.445716   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.446135   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has current primary IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.446150   30723 main.go:141] libmachine: (ha-863044-m03) Found IP for machine: 192.168.39.30
	I0815 00:23:02.446160   30723 main.go:141] libmachine: (ha-863044-m03) Reserving static IP address...
	I0815 00:23:02.446566   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find host DHCP lease matching {name: "ha-863044-m03", mac: "52:54:00:5e:df:2b", ip: "192.168.39.30"} in network mk-ha-863044
	I0815 00:23:02.520969   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Getting to WaitForSSH function...
	I0815 00:23:02.521002   30723 main.go:141] libmachine: (ha-863044-m03) Reserved static IP address: 192.168.39.30
	I0815 00:23:02.521015   30723 main.go:141] libmachine: (ha-863044-m03) Waiting for SSH to be available...
	I0815 00:23:02.523316   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.523676   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.523710   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.523874   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using SSH client type: external
	I0815 00:23:02.523900   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa (-rw-------)
	I0815 00:23:02.523933   30723 main.go:141] libmachine: (ha-863044-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:23:02.523951   30723 main.go:141] libmachine: (ha-863044-m03) DBG | About to run SSH command:
	I0815 00:23:02.523965   30723 main.go:141] libmachine: (ha-863044-m03) DBG | exit 0
	I0815 00:23:02.644472   30723 main.go:141] libmachine: (ha-863044-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 00:23:02.644771   30723 main.go:141] libmachine: (ha-863044-m03) KVM machine creation complete!
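The "will retry after ..." lines above come from minikube's retry helper (retry.go) while it waits for the freshly defined libvirt domain to pick up a DHCP lease in the mk-ha-863044 network. A minimal stand-alone sketch of that wait-with-growing-backoff pattern is given below, assuming a hypothetical lookupIP function in place of the real libvirt lease lookup and rough doubling of the delay; it is an illustration, not minikube's actual implementation.

// Sketch of the wait-for-IP retry pattern seen in the log above.
// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases by MAC address; the interval choices are illustrative only.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP returns errNoLease until the guest has obtained an address.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // placeholder for the libvirt lease query
}

// waitForIP polls lookupIP, roughly doubling the delay between attempts,
// mirroring the "will retry after ...: waiting for machine to come up" lines.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:5e:df:2b", 5*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}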
	I0815 00:23:02.645105   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:23:02.645586   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:02.645787   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:02.645926   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:23:02.645942   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:23:02.647102   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:23:02.647115   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:23:02.647122   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:23:02.647130   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.649413   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.649805   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.649830   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.650044   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.650233   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.650405   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.650535   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.650733   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.650939   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.650953   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:23:02.755712   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:23:02.755729   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:23:02.755737   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.758198   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.758550   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.758577   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.758737   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.758923   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.759080   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.759220   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.759374   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.759574   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.759588   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:23:02.860851   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:23:02.860922   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:23:02.860938   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:23:02.860951   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:02.861185   30723 buildroot.go:166] provisioning hostname "ha-863044-m03"
	I0815 00:23:02.861207   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:02.861364   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.863861   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.864294   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.864314   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.864460   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.864632   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.864784   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.864892   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.865031   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.865209   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.865219   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044-m03 && echo "ha-863044-m03" | sudo tee /etc/hostname
	I0815 00:23:02.977169   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044-m03
	
	I0815 00:23:02.977194   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.979736   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.980092   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.980120   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.980281   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.980453   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.980588   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.980714   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.980875   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.981037   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.981059   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:23:03.088946   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:23:03.088969   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:23:03.088982   30723 buildroot.go:174] setting up certificates
	I0815 00:23:03.088990   30723 provision.go:84] configureAuth start
	I0815 00:23:03.088998   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:03.089290   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.092163   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.092527   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.092559   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.092709   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.094875   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.095171   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.095195   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.095365   30723 provision.go:143] copyHostCerts
	I0815 00:23:03.095394   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:23:03.095425   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:23:03.095433   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:23:03.095497   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:23:03.095564   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:23:03.095581   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:23:03.095589   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:23:03.095613   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:23:03.095662   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:23:03.095679   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:23:03.095686   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:23:03.095708   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:23:03.095756   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044-m03 san=[127.0.0.1 192.168.39.30 ha-863044-m03 localhost minikube]
	I0815 00:23:03.155012   30723 provision.go:177] copyRemoteCerts
	I0815 00:23:03.155061   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:23:03.155083   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.157492   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.157819   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.157846   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.157993   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.158161   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.158309   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.158462   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.238464   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:23:03.238527   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:23:03.262331   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:23:03.262400   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:23:03.286135   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:23:03.286199   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:23:03.310148   30723 provision.go:87] duration metric: took 221.143534ms to configureAuth
	I0815 00:23:03.310175   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:23:03.310352   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:03.310416   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.312961   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.313337   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.313365   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.313513   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.313696   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.313882   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.314028   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.314215   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:03.314406   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:03.314426   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:23:03.577378   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:23:03.577409   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:23:03.577420   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetURL
	I0815 00:23:03.578583   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using libvirt version 6000000
	I0815 00:23:03.580950   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.581334   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.581363   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.581524   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:23:03.581540   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:23:03.581548   30723 client.go:171] duration metric: took 22.538971017s to LocalClient.Create
	I0815 00:23:03.581573   30723 start.go:167] duration metric: took 22.539045128s to libmachine.API.Create "ha-863044"
	I0815 00:23:03.581584   30723 start.go:293] postStartSetup for "ha-863044-m03" (driver="kvm2")
	I0815 00:23:03.581597   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:23:03.581618   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.581839   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:23:03.581865   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.583908   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.584264   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.584291   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.584411   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.584570   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.584744   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.584920   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.665974   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:23:03.669868   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:23:03.669891   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:23:03.669944   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:23:03.670012   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:23:03.670021   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:23:03.670098   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:23:03.678728   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:23:03.700112   30723 start.go:296] duration metric: took 118.515675ms for postStartSetup
	I0815 00:23:03.700152   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:23:03.700769   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.703245   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.703600   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.703630   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.703842   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:23:03.704015   30723 start.go:128] duration metric: took 22.679361913s to createHost
	I0815 00:23:03.704037   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.706285   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.706611   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.706637   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.706779   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.706909   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.707039   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.707139   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.707282   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:03.707441   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:03.707452   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:23:03.804906   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681383.766307938
	
	I0815 00:23:03.804927   30723 fix.go:216] guest clock: 1723681383.766307938
	I0815 00:23:03.804935   30723 fix.go:229] Guest: 2024-08-15 00:23:03.766307938 +0000 UTC Remote: 2024-08-15 00:23:03.704024469 +0000 UTC m=+145.856173876 (delta=62.283469ms)
	I0815 00:23:03.804950   30723 fix.go:200] guest clock delta is within tolerance: 62.283469ms
	I0815 00:23:03.804954   30723 start.go:83] releasing machines lock for "ha-863044-m03", held for 22.780400611s
	I0815 00:23:03.804971   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.805256   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.807665   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.808040   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.808058   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.810229   30723 out.go:177] * Found network options:
	I0815 00:23:03.811510   30723 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.170
	W0815 00:23:03.812593   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 00:23:03.812609   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:23:03.812619   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813209   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813379   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813465   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:23:03.813510   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	W0815 00:23:03.813541   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 00:23:03.813564   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:23:03.813630   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:23:03.813648   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.816313   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816445   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816698   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.816723   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816768   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.816796   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816872   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.817049   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.817073   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.817207   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.817208   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.817370   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.817399   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.817532   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:04.045451   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:23:04.051702   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:23:04.051766   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:23:04.067872   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:23:04.067891   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:23:04.067952   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:23:04.083179   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:23:04.095780   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:23:04.095834   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:23:04.108241   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:23:04.121145   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:23:04.242613   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:23:04.399000   30723 docker.go:233] disabling docker service ...
	I0815 00:23:04.399082   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:23:04.413030   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:23:04.424872   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:23:04.534438   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:23:04.641008   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:23:04.654571   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:23:04.671767   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:23:04.671847   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.681525   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:23:04.681592   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.691399   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.702111   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.711792   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:23:04.721433   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.730986   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.749433   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.760129   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:23:04.769285   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:23:04.769348   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:23:04.782190   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:23:04.791844   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:04.899751   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:23:05.032342   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:23:05.032429   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:23:05.036908   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:23:05.036962   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:23:05.040405   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:23:05.082663   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:23:05.082730   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:23:05.112643   30723 ssh_runner.go:195] Run: crio --version
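The ssh_runner lines above adjust CRI-O on the new node (pause image, cgroup driver, netfilter and ip_forward prerequisites) and then restart the service before checking the crictl and crio versions. The sketch below shows how such a command sequence could be replayed over SSH using golang.org/x/crypto/ssh; the client library, the reduced set of steps, and the runRemote helper are assumptions for illustration and not minikube's own ssh_runner.

// A minimal sketch (assuming golang.org/x/crypto/ssh) of applying the CRI-O
// configuration steps from the log above over SSH. Host, user, and key path
// are taken from the log; everything else is illustrative.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command in a fresh SSH session and echoes its output.
func runRemote(client *ssh.Client, cmd string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	config := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.30:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same order as the log: pause image, cgroup driver, networking
	// prerequisites, then a CRI-O restart.
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runRemote(client, s); err != nil {
			log.Fatalf("step %q failed: %v", s, err)
		}
	}
}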
	I0815 00:23:05.141341   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:23:05.142668   30723 out.go:177]   - env NO_PROXY=192.168.39.6
	I0815 00:23:05.143850   30723 out.go:177]   - env NO_PROXY=192.168.39.6,192.168.39.170
	I0815 00:23:05.144851   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:05.147297   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:05.147618   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:05.147654   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:05.147836   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:23:05.151706   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:23:05.163415   30723 mustload.go:65] Loading cluster: ha-863044
	I0815 00:23:05.163668   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:05.163947   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:05.163995   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:05.180222   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0815 00:23:05.180631   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:05.181091   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:05.181112   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:05.181430   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:05.181634   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:23:05.183073   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:23:05.183408   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:05.183440   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:05.198183   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0815 00:23:05.198572   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:05.199070   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:05.199094   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:05.199409   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:05.199593   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:23:05.199723   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.30
	I0815 00:23:05.199734   30723 certs.go:194] generating shared ca certs ...
	I0815 00:23:05.199747   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.199856   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:23:05.199892   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:23:05.199900   30723 certs.go:256] generating profile certs ...
	I0815 00:23:05.199962   30723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:23:05.199986   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460
	I0815 00:23:05.200002   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.30 192.168.39.254]
	I0815 00:23:05.294220   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 ...
	I0815 00:23:05.294249   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460: {Name:mk0950b6d97069d8aa367779aabd7a73d7c2423e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.294422   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460 ...
	I0815 00:23:05.294434   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460: {Name:mka467de40a002e45b894a979d221dbb7b5a2008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.294503   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:23:05.294634   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:23:05.294829   30723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:23:05.294850   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:23:05.294880   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:23:05.294894   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:23:05.294906   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:23:05.294918   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:23:05.294931   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:23:05.294943   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:23:05.294953   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:23:05.295019   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:23:05.295049   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:23:05.295059   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:23:05.295079   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:23:05.295100   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:23:05.295123   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:23:05.295168   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:23:05.295193   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.295205   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.295215   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.295244   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:23:05.298013   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:05.298361   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:23:05.298386   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:05.298543   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:23:05.298708   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:23:05.298874   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:23:05.298992   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:23:05.376939   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0815 00:23:05.381995   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 00:23:05.393346   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0815 00:23:05.397855   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0815 00:23:05.408041   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 00:23:05.411683   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 00:23:05.420961   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0815 00:23:05.424606   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 00:23:05.433785   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0815 00:23:05.437463   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 00:23:05.446772   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0815 00:23:05.450349   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 00:23:05.460013   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:23:05.483481   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:23:05.505263   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:23:05.526935   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:23:05.549754   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 00:23:05.571986   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:23:05.603518   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:23:05.625444   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:23:05.647240   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:23:05.669239   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:23:05.690391   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:23:05.713453   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 00:23:05.728875   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0815 00:23:05.744592   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 00:23:05.759747   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 00:23:05.774921   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 00:23:05.789979   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 00:23:05.805061   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 00:23:05.821915   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:23:05.827613   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:23:05.840340   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.844450   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.844499   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.850019   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:23:05.861182   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:23:05.872496   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.876597   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.876644   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.881951   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:23:05.893309   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:23:05.903270   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.907051   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.907098   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.912365   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:23:05.924899   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:23:05.928787   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:23:05.928833   30723 kubeadm.go:934] updating node {m03 192.168.39.30 8443 v1.31.0 crio true true} ...
	I0815 00:23:05.928904   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:23:05.928929   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:23:05.928957   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:23:05.945776   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:23:05.945826   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 00:23:05.945869   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:23:05.954537   30723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 00:23:05.954590   30723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 00:23:05.963254   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 00:23:05.963279   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 00:23:05.963297   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:23:05.963283   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:23:05.963254   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 00:23:05.963372   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:23:05.963407   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:23:05.963430   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:23:05.971891   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 00:23:05.971920   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 00:23:05.982514   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 00:23:05.982547   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 00:23:05.982523   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:23:05.982664   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:23:06.032819   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 00:23:06.032865   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0815 00:23:06.771496   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 00:23:06.780671   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 00:23:06.797055   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:23:06.814947   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:23:06.832182   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:23:06.835880   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:23:06.848417   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:06.971574   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:23:06.989270   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:23:06.989750   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:06.989797   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:07.004926   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
	I0815 00:23:07.005394   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:07.005901   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:07.005925   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:07.006221   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:07.006420   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:23:07.006531   30723 start.go:317] joinCluster: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:23:07.006707   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 00:23:07.006729   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:23:07.009661   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:07.010105   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:23:07.010128   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:07.010269   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:23:07.010428   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:23:07.010593   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:23:07.010745   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:23:07.159544   30723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:23:07.159590   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z8bf15.z1raht0f1z3edyo5 --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m03 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443"
	I0815 00:23:28.183777   30723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z8bf15.z1raht0f1z3edyo5 --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m03 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": (21.024162503s)
	I0815 00:23:28.183819   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 00:23:28.752616   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044-m03 minikube.k8s.io/updated_at=2024_08_15T00_23_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=false
	I0815 00:23:28.868400   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863044-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 00:23:28.986224   30723 start.go:319] duration metric: took 21.979685924s to joinCluster
	I0815 00:23:28.986308   30723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:23:28.986655   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:28.987863   30723 out.go:177] * Verifying Kubernetes components...
	I0815 00:23:28.989030   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:29.239801   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:23:29.261020   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:23:29.261366   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 00:23:29.261442   30723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0815 00:23:29.261706   30723 node_ready.go:35] waiting up to 6m0s for node "ha-863044-m03" to be "Ready" ...
	I0815 00:23:29.261790   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:29.261803   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:29.261814   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:29.261819   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:29.265217   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:29.762201   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:29.762221   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:29.762267   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:29.762275   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:29.765605   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:30.262850   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:30.262876   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:30.262887   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:30.262893   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:30.266850   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:30.762218   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:30.762244   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:30.762256   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:30.762264   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:30.765387   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:31.261951   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:31.261972   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:31.261979   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:31.261983   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:31.264871   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:31.265391   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:31.762374   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:31.762395   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:31.762403   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:31.762407   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:31.765551   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:32.262782   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:32.262804   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:32.262814   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:32.262821   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:32.266272   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:32.761957   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:32.761980   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:32.761990   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:32.761996   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:32.765626   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:33.262203   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:33.262227   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:33.262236   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:33.262240   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:33.265402   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:33.265989   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:33.762294   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:33.762320   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:33.762331   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:33.762337   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:33.765600   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:34.262715   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:34.262742   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:34.262754   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:34.262760   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:34.266416   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:34.762377   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:34.762401   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:34.762409   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:34.762415   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:34.765678   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:35.262118   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:35.262139   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:35.262149   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:35.262153   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:35.265175   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:35.762531   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:35.762558   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:35.762569   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:35.762574   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:35.766589   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:35.767121   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:36.262355   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:36.262381   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:36.262392   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:36.262399   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:36.265426   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:36.762241   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:36.762267   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:36.762275   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:36.762278   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:36.765463   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:37.262753   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:37.262774   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:37.262782   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:37.262788   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:37.265905   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:37.761868   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:37.761896   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:37.761915   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:37.761921   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:37.764397   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:38.261984   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:38.262005   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:38.262013   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:38.262018   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:38.265252   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:38.265721   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:38.762095   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:38.762116   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:38.762125   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:38.762128   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:38.765257   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:39.262271   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:39.262292   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:39.262300   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:39.262304   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:39.265431   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:39.762336   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:39.762356   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:39.762365   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:39.762369   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:39.765460   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:40.261997   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:40.262021   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:40.262032   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:40.262037   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:40.265626   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:40.266146   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:40.761914   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:40.761940   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:40.761948   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:40.761953   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:40.765018   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:41.262822   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:41.262843   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:41.262850   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:41.262857   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:41.266341   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:41.762252   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:41.762273   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:41.762281   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:41.762285   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:41.765201   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:42.262441   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:42.262462   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:42.262470   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:42.262474   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:42.266072   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:42.266714   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:42.762042   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:42.762064   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:42.762071   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:42.762075   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:42.764954   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:43.262497   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:43.262517   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:43.262526   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:43.262531   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:43.265650   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:43.762580   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:43.762600   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:43.762607   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:43.762612   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:43.765535   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:44.261983   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:44.262004   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:44.262011   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:44.262016   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:44.265367   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:44.762525   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:44.762549   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:44.762560   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:44.762566   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:44.765739   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:44.766328   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:45.262307   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:45.262328   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:45.262335   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:45.262339   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:45.265414   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:45.762870   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:45.762903   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:45.762911   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:45.762915   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:45.765898   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.262664   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:46.262686   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.262697   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.262703   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.267191   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:46.762403   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:46.762425   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.762433   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.762436   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.766020   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:46.766591   30723 node_ready.go:49] node "ha-863044-m03" has status "Ready":"True"
	I0815 00:23:46.766614   30723 node_ready.go:38] duration metric: took 17.504893196s for node "ha-863044-m03" to be "Ready" ...
	I0815 00:23:46.766621   30723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:23:46.766675   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:46.766685   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.766692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.766696   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.771757   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:46.778225   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.778300   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bc2jh
	I0815 00:23:46.778310   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.778317   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.778320   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.780721   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.781337   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.781351   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.781358   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.781363   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.783502   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.784055   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.784074   30723 pod_ready.go:81] duration metric: took 5.82559ms for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.784082   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.784134   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxpqd
	I0815 00:23:46.784143   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.784150   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.784159   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.786322   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.786834   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.786848   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.786855   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.786859   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.788908   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.789381   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.789399   30723 pod_ready.go:81] duration metric: took 5.309653ms for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.789410   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.789460   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044
	I0815 00:23:46.789471   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.789481   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.789490   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.791392   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.791995   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.792013   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.792024   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.792032   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.794092   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.794448   30723 pod_ready.go:92] pod "etcd-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.794464   30723 pod_ready.go:81] duration metric: took 5.043831ms for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.794471   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.794507   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m02
	I0815 00:23:46.794515   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.794520   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.794523   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.796416   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.796941   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:46.796957   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.796963   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.796968   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.798918   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.799280   30723 pod_ready.go:92] pod "etcd-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.799297   30723 pod_ready.go:81] duration metric: took 4.820222ms for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.799306   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.963197   30723 request.go:632] Waited for 163.828732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m03
	I0815 00:23:46.963262   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m03
	I0815 00:23:46.963268   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.963275   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.963287   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.966247   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:47.163274   30723 request.go:632] Waited for 196.370188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:47.163343   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:47.163351   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.163364   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.163375   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.165860   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:47.166257   30723 pod_ready.go:92] pod "etcd-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.166277   30723 pod_ready.go:81] duration metric: took 366.963774ms for pod "etcd-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.166297   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.362470   30723 request.go:632] Waited for 196.093871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:23:47.362528   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:23:47.362535   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.362545   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.362554   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.365637   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.562825   30723 request.go:632] Waited for 196.401068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:47.562896   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:47.562901   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.562909   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.562913   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.565976   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.566634   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.566656   30723 pod_ready.go:81] duration metric: took 400.351897ms for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.566669   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.762640   30723 request.go:632] Waited for 195.898128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:23:47.762727   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:23:47.762740   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.762751   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.762761   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.766059   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.963284   30723 request.go:632] Waited for 196.310541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:47.963366   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:47.963376   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.963386   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.963392   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.966509   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.967134   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.967151   30723 pod_ready.go:81] duration metric: took 400.470846ms for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.967163   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.162740   30723 request.go:632] Waited for 195.501179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m03
	I0815 00:23:48.162820   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m03
	I0815 00:23:48.162830   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.162837   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.162841   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.165747   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:48.362867   30723 request.go:632] Waited for 196.34759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:48.362917   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:48.362923   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.362930   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.362936   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.366134   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.366679   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:48.366696   30723 pod_ready.go:81] duration metric: took 399.526483ms for pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.366713   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.562854   30723 request.go:632] Waited for 196.063266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:23:48.562903   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:23:48.562908   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.562916   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.562920   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.566154   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.763311   30723 request.go:632] Waited for 196.366786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:48.763407   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:48.763418   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.763429   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.763440   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.766790   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.767433   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:48.767451   30723 pod_ready.go:81] duration metric: took 400.728441ms for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.767463   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.962407   30723 request.go:632] Waited for 194.882466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:23:48.962482   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:23:48.962487   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.962495   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.962502   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.965861   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.163177   30723 request.go:632] Waited for 196.351167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.163230   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.163236   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.163249   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.163270   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.166571   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.167130   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.167148   30723 pod_ready.go:81] duration metric: took 399.677131ms for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.167159   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.363305   30723 request.go:632] Waited for 196.076477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m03
	I0815 00:23:49.363369   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m03
	I0815 00:23:49.363375   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.363383   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.363389   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.366479   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.562403   30723 request.go:632] Waited for 195.275827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:49.562477   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:49.562482   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.562490   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.562494   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.565661   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.566321   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.566354   30723 pod_ready.go:81] duration metric: took 399.187513ms for pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.566367   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.763450   30723 request.go:632] Waited for 197.012223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:23:49.763536   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:23:49.763548   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.763559   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.763565   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.766755   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.962813   30723 request.go:632] Waited for 195.352835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.962880   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.962888   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.962901   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.962913   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.974265   30723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 00:23:49.974888   30723 pod_ready.go:92] pod "kube-proxy-6l4gp" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.974915   30723 pod_ready.go:81] duration metric: took 408.539871ms for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.974929   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.162858   30723 request.go:632] Waited for 187.863713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:23:50.162906   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:23:50.162911   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.162918   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.162923   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.166036   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.362476   30723 request.go:632] Waited for 195.661693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:50.362524   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:50.362529   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.362536   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.362540   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.365821   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.366491   30723 pod_ready.go:92] pod "kube-proxy-758vr" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:50.366509   30723 pod_ready.go:81] duration metric: took 391.573753ms for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.366517   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxmqn" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.563085   30723 request.go:632] Waited for 196.511211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxmqn
	I0815 00:23:50.563153   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxmqn
	I0815 00:23:50.563159   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.563167   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.563170   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.566786   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.762900   30723 request.go:632] Waited for 195.341406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:50.762963   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:50.762971   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.762983   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.762994   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.766297   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.766778   30723 pod_ready.go:92] pod "kube-proxy-qxmqn" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:50.766797   30723 pod_ready.go:81] duration metric: took 400.271262ms for pod "kube-proxy-qxmqn" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.766806   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.962948   30723 request.go:632] Waited for 196.051355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:23:50.963021   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:23:50.963029   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.963040   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.963047   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.966182   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.162480   30723 request.go:632] Waited for 195.656633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:51.162530   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:51.162535   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.162543   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.162548   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.165107   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:51.165684   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.165707   30723 pod_ready.go:81] duration metric: took 398.894169ms for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.165718   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.362730   30723 request.go:632] Waited for 196.932795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:23:51.362783   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:23:51.362788   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.362796   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.362799   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.366771   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.562777   30723 request.go:632] Waited for 195.362919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:51.562881   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:51.562891   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.562898   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.562904   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.565998   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.566525   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.566541   30723 pod_ready.go:81] duration metric: took 400.815114ms for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.566553   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.762645   30723 request.go:632] Waited for 196.027971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m03
	I0815 00:23:51.762711   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m03
	I0815 00:23:51.762717   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.762725   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.762732   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.765743   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:51.963320   30723 request.go:632] Waited for 196.731498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:51.963409   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:51.963418   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.963429   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.963438   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.966817   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.967345   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.967368   30723 pod_ready.go:81] duration metric: took 400.803731ms for pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.967381   30723 pod_ready.go:38] duration metric: took 5.200749366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:23:51.967402   30723 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:23:51.967464   30723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:23:51.984625   30723 api_server.go:72] duration metric: took 22.998247596s to wait for apiserver process to appear ...
	I0815 00:23:51.984647   30723 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:23:51.984678   30723 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0815 00:23:51.988572   30723 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0815 00:23:51.988643   30723 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0815 00:23:51.988671   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.988683   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.988692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.989499   30723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 00:23:51.989551   30723 api_server.go:141] control plane version: v1.31.0
	I0815 00:23:51.989563   30723 api_server.go:131] duration metric: took 4.900846ms to wait for apiserver health ...
	I0815 00:23:51.989572   30723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:23:52.163222   30723 request.go:632] Waited for 173.57961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.163285   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.163290   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.163298   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.163305   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.168452   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:52.174557   30723 system_pods.go:59] 24 kube-system pods found
	I0815 00:23:52.174584   30723 system_pods.go:61] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:23:52.174589   30723 system_pods.go:61] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:23:52.174592   30723 system_pods.go:61] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:23:52.174595   30723 system_pods.go:61] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:23:52.174598   30723 system_pods.go:61] "etcd-ha-863044-m03" [774efb6d-9c64-4d80-8bc0-54a8ee452346] Running
	I0815 00:23:52.174601   30723 system_pods.go:61] "kindnet-jdl2d" [f621eec7-2d0e-4f1f-83f3-7bc5a1322693] Running
	I0815 00:23:52.174603   30723 system_pods.go:61] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:23:52.174606   30723 system_pods.go:61] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:23:52.174608   30723 system_pods.go:61] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:23:52.174611   30723 system_pods.go:61] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:23:52.174614   30723 system_pods.go:61] "kube-apiserver-ha-863044-m03" [aea4dcdd-c0d6-44d8-a02d-881b92de68d3] Running
	I0815 00:23:52.174617   30723 system_pods.go:61] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:23:52.174620   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:23:52.174624   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m03" [0ece8182-3a99-4f02-8ef7-d8ddbe2edf98] Running
	I0815 00:23:52.174628   30723 system_pods.go:61] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:23:52.174634   30723 system_pods.go:61] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:23:52.174636   30723 system_pods.go:61] "kube-proxy-qxmqn" [c40bb19e-c0bd-43fb-bbfc-3c9dfcd2fbad] Running
	I0815 00:23:52.174640   30723 system_pods.go:61] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:23:52.174642   30723 system_pods.go:61] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:23:52.174645   30723 system_pods.go:61] "kube-scheduler-ha-863044-m03" [a5dad54e-959c-4bb1-ab47-9c952dac9926] Running
	I0815 00:23:52.174648   30723 system_pods.go:61] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:23:52.174651   30723 system_pods.go:61] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:23:52.174654   30723 system_pods.go:61] "kube-vip-ha-863044-m03" [b66363f1-db60-4f4b-8525-2d4c5366ceb4] Running
	I0815 00:23:52.174656   30723 system_pods.go:61] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:23:52.174662   30723 system_pods.go:74] duration metric: took 185.083199ms to wait for pod list to return data ...
	I0815 00:23:52.174672   30723 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:23:52.363097   30723 request.go:632] Waited for 188.345607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:23:52.363164   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:23:52.363176   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.363187   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.363197   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.366585   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:52.366696   30723 default_sa.go:45] found service account: "default"
	I0815 00:23:52.366711   30723 default_sa.go:55] duration metric: took 192.033273ms for default service account to be created ...
	I0815 00:23:52.366718   30723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:23:52.563133   30723 request.go:632] Waited for 196.356112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.563221   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.563232   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.563244   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.563251   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.568835   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:52.576417   30723 system_pods.go:86] 24 kube-system pods found
	I0815 00:23:52.576442   30723 system_pods.go:89] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:23:52.576448   30723 system_pods.go:89] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:23:52.576453   30723 system_pods.go:89] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:23:52.576457   30723 system_pods.go:89] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:23:52.576461   30723 system_pods.go:89] "etcd-ha-863044-m03" [774efb6d-9c64-4d80-8bc0-54a8ee452346] Running
	I0815 00:23:52.576464   30723 system_pods.go:89] "kindnet-jdl2d" [f621eec7-2d0e-4f1f-83f3-7bc5a1322693] Running
	I0815 00:23:52.576468   30723 system_pods.go:89] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:23:52.576472   30723 system_pods.go:89] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:23:52.576476   30723 system_pods.go:89] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:23:52.576481   30723 system_pods.go:89] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:23:52.576486   30723 system_pods.go:89] "kube-apiserver-ha-863044-m03" [aea4dcdd-c0d6-44d8-a02d-881b92de68d3] Running
	I0815 00:23:52.576490   30723 system_pods.go:89] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:23:52.576498   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:23:52.576503   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m03" [0ece8182-3a99-4f02-8ef7-d8ddbe2edf98] Running
	I0815 00:23:52.576509   30723 system_pods.go:89] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:23:52.576513   30723 system_pods.go:89] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:23:52.576517   30723 system_pods.go:89] "kube-proxy-qxmqn" [c40bb19e-c0bd-43fb-bbfc-3c9dfcd2fbad] Running
	I0815 00:23:52.576522   30723 system_pods.go:89] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:23:52.576526   30723 system_pods.go:89] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:23:52.576531   30723 system_pods.go:89] "kube-scheduler-ha-863044-m03" [a5dad54e-959c-4bb1-ab47-9c952dac9926] Running
	I0815 00:23:52.576535   30723 system_pods.go:89] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:23:52.576539   30723 system_pods.go:89] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:23:52.576544   30723 system_pods.go:89] "kube-vip-ha-863044-m03" [b66363f1-db60-4f4b-8525-2d4c5366ceb4] Running
	I0815 00:23:52.576547   30723 system_pods.go:89] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:23:52.576553   30723 system_pods.go:126] duration metric: took 209.829403ms to wait for k8s-apps to be running ...
	I0815 00:23:52.576562   30723 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:23:52.576603   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:23:52.593088   30723 system_svc.go:56] duration metric: took 16.516305ms WaitForService to wait for kubelet
	I0815 00:23:52.593116   30723 kubeadm.go:582] duration metric: took 23.606742835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:23:52.593134   30723 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:23:52.762489   30723 request.go:632] Waited for 169.272948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0815 00:23:52.762543   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0815 00:23:52.762548   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.762556   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.762559   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.766816   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:52.768109   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768129   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768140   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768146   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768151   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768157   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768163   30723 node_conditions.go:105] duration metric: took 175.024259ms to run NodePressure ...
	I0815 00:23:52.768183   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:23:52.768213   30723 start.go:255] writing updated cluster config ...
	I0815 00:23:52.768483   30723 ssh_runner.go:195] Run: rm -f paused
	I0815 00:23:52.817091   30723 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:23:52.818943   30723 out.go:177] * Done! kubectl is now configured to use "ha-863044" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 00:27:33 ha-863044 crio[681]: time="2024-08-15 00:27:33.972844835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681653972814349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=050abcc0-f706-42bf-96d6-424f467fefee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:33 ha-863044 crio[681]: time="2024-08-15 00:27:33.973381088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7d70208-b3e6-4f78-9583-25be96457fbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:33 ha-863044 crio[681]: time="2024-08-15 00:27:33.973467171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7d70208-b3e6-4f78-9583-25be96457fbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:33 ha-863044 crio[681]: time="2024-08-15 00:27:33.973997838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7d70208-b3e6-4f78-9583-25be96457fbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.013267339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8aed801-43de-47c5-bf82-71d211c754fa name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.013356021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8aed801-43de-47c5-bf82-71d211c754fa name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.014682844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca11e0d7-785d-497d-8a9d-0385dfc3b9da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.015180890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681654015159612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca11e0d7-785d-497d-8a9d-0385dfc3b9da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.015758032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21803305-6713-4273-b1b2-cf7a3713e82d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.015823143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21803305-6713-4273-b1b2-cf7a3713e82d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.016119354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21803305-6713-4273-b1b2-cf7a3713e82d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.050626608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d4a5357-885b-4c6d-a007-3f789fedf7a9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.050704567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d4a5357-885b-4c6d-a007-3f789fedf7a9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.051751857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77728788-edee-4b2f-9cf3-1091c31cb60c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.052685182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681654052658031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77728788-edee-4b2f-9cf3-1091c31cb60c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.053248925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39008d8e-1816-4d6d-af93-715101733a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.053375648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39008d8e-1816-4d6d-af93-715101733a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.053638943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39008d8e-1816-4d6d-af93-715101733a1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.086636979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d524e688-ecfd-4cbf-8ae1-2f1066f74e77 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.086722433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d524e688-ecfd-4cbf-8ae1-2f1066f74e77 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.087753604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91c74847-2442-4af3-896b-3118b416d16f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.088327478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681654088303747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91c74847-2442-4af3-896b-3118b416d16f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.088839369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e946b276-1dbf-4f19-b6a8-f07676a8f2d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.088900824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e946b276-1dbf-4f19-b6a8-f07676a8f2d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:27:34 ha-863044 crio[681]: time="2024-08-15 00:27:34.089184423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e946b276-1dbf-4f19-b6a8-f07676a8f2d7 name=/runtime.v1.RuntimeService/ListContainers
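The crio debug entries above are the three CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued against the crio socket on ha-863044. As a rough sketch, assuming crictl is present on the node image (it normally is for these minikube builds), the same RPCs can be exercised by hand over SSH, in the style of the other commands in this report:

  out/minikube-linux-amd64 -p ha-863044 ssh "sudo crictl version"      # RuntimeService/Version
  out/minikube-linux-amd64 -p ha-863044 ssh "sudo crictl imagefsinfo"  # ImageService/ImageFsInfo
  out/minikube-linux-amd64 -p ha-863044 ssh "sudo crictl ps -a"        # RuntimeService/ListContainers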
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a3e7281c498f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e9555e65cebe7       busybox-7dff88458-ck6d9
	8c05051caebc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   e6e8146f29bde       storage-provisioner
	770157c751290       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   1334a86739ccf       coredns-6f6b679f8f-bc2jh
	a6304cc907b70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   4feecb19b205a       coredns-6f6b679f8f-jxpqd
	024782bd78877       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   c2b2f0c2bdc2e       kindnet-ptbpb
	5d1d7d03658b7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   a6a3b389836fc       kube-proxy-758vr
	67611ae45f1e5       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   77e4316165593       kube-vip-ha-863044
	9038fb04ce717       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   a1cf7b7ef6f41       kube-controller-manager-ha-863044
	0624b371b469a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   ba41c766be2d5       kube-scheduler-ha-863044
	acf9154524991       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   1825ea5e56cf4       etcd-ha-863044
	edee09d480aed       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   e430c0bc26b25       kube-apiserver-ha-863044
	
	
	==> coredns [770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e] <==
	[INFO] 10.244.0.4:45424 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003457281s
	[INFO] 10.244.0.4:44072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187168s
	[INFO] 10.244.2.2:55108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149056s
	[INFO] 10.244.2.2:41293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000385323s
	[INFO] 10.244.2.2:38729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145689s
	[INFO] 10.244.2.2:33124 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000292113s
	[INFO] 10.244.1.2:33531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000255406s
	[INFO] 10.244.1.2:51132 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001668147s
	[INFO] 10.244.1.2:42284 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114325s
	[INFO] 10.244.1.2:50113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066268s
	[INFO] 10.244.1.2:52660 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013458s
	[INFO] 10.244.0.4:46269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091339s
	[INFO] 10.244.0.4:59422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042431s
	[INFO] 10.244.2.2:36516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086546s
	[INFO] 10.244.1.2:57808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122743s
	[INFO] 10.244.1.2:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116945s
	[INFO] 10.244.1.2:51392 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008307s
	[INFO] 10.244.0.4:42010 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031726s
	[INFO] 10.244.2.2:44915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143127s
	[INFO] 10.244.2.2:37741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170015s
	[INFO] 10.244.2.2:58647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130581s
	[INFO] 10.244.1.2:49418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247229s
	[INFO] 10.244.1.2:44042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127451s
	[INFO] 10.244.1.2:41801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015235s
	[INFO] 10.244.1.2:51078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176731s
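These coredns blocks are per-pod query logs. Assuming the kubeconfig context is named after the profile (minikube's default), the same output can be pulled straight from the pods listed above:

  kubectl --context ha-863044 -n kube-system logs coredns-6f6b679f8f-bc2jh
  kubectl --context ha-863044 -n kube-system logs coredns-6f6b679f8f-jxpqd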
	
	
	==> coredns [a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787] <==
	[INFO] 10.244.0.4:45311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000930137s
	[INFO] 10.244.0.4:39922 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00305449s
	[INFO] 10.244.2.2:33332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115309s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001291279s
	[INFO] 10.244.2.2:56904 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001340981s
	[INFO] 10.244.1.2:32926 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109486s
	[INFO] 10.244.0.4:35014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015446s
	[INFO] 10.244.0.4:46414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148102s
	[INFO] 10.244.2.2:51282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016555s
	[INFO] 10.244.2.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529953s
	[INFO] 10.244.2.2:42863 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00043817s
	[INFO] 10.244.2.2:39074 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067798s
	[INFO] 10.244.1.2:52314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192016s
	[INFO] 10.244.1.2:58476 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116995s
	[INFO] 10.244.1.2:39360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001839118s
	[INFO] 10.244.0.4:51814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012471s
	[INFO] 10.244.0.4:40547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083981s
	[INFO] 10.244.2.2:34181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015996s
	[INFO] 10.244.2.2:56520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000727856s
	[INFO] 10.244.2.2:38242 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103367s
	[INFO] 10.244.1.2:50032 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.0.4:55523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123577s
	[INFO] 10.244.0.4:42586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010348s
	[INFO] 10.244.0.4:36103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184736s
	[INFO] 10.244.2.2:57332 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163958s
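The lookups above originate from the busybox test pods at 10.244.0.4, 10.244.1.2 and 10.244.2.2. A minimal way to generate an equivalent query by hand (the pod name dns-test is hypothetical, and the image choice is an assumption based on the busybox image referenced earlier in this log):

  kubectl --context ha-863044 run dns-test --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local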
	
	
	==> describe nodes <==
	Name:               ha-863044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:21:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:27:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-863044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e33f2588c28f4daf846273c46c5ec17c
	  System UUID:                e33f2588-c28f-4daf-8462-73c46c5ec17c
	  Boot ID:                    262603d0-6087-4822-8e6c-89d7a28279b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ck6d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-6f6b679f8f-bc2jh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-6f6b679f8f-jxpqd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-863044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-ptbpb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-863044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-863044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-758vr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-863044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-863044                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m22s)  kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m22s)  kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m22s)  kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s                  kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s                  kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s                  kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal  NodeReady                5m55s                  kubelet          Node ha-863044 status is now: NodeReady
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
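The block above is standard kubectl describe output for the primary control-plane node. Assuming the kubeconfig context matches the profile name, the same view (plus a one-line status check across all four nodes) can be reproduced with:

  kubectl --context ha-863044 get nodes -o wide
  kubectl --context ha-863044 describe node ha-863044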
	
	
	Name:               ha-863044-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:22:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:25:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-863044-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 877b666314684accbfd657286f8d0095
	  System UUID:                877b6663-1468-4acc-bfd6-57286f8d0095
	  Boot ID:                    5a408699-89f8-44af-a389-c8beb5731e48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zmr7b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-863044-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-xpnzd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-863044-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-863044-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-6l4gp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-863044-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-863044-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-863044-m02 status is now: NodeNotReady
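All four conditions on ha-863044-m02 are Unknown and the node controller flagged it NodeNotReady at 00:25:49, i.e. the kubelet stopped reporting while the rest of the cluster stayed up. A quick way to poll just the Ready condition (a sketch, assuming the same context name) is:

  kubectl --context ha-863044 get node ha-863044-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'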
	
	
	Name:               ha-863044-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_23_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:27:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    ha-863044-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0a91434394dddbc59d67dd539b2b7
	  System UUID:                bba0a914-3439-4ddd-bc59-d67dd539b2b7
	  Boot ID:                    ee412178-48eb-40cc-833e-05ae47d59349
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dpcjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-863044-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-jdl2d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-863044-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-863044-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-qxmqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-863044-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-863044-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-863044-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
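Each node description also records the pod CIDR that kindnet routes for (one 10.244.x.0/24 per node in this cluster). To list the assignments in one shot, assuming the same context name:

  kubectl --context ha-863044 get nodes \
    -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR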
	
	
	Name:               ha-863044-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_24_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:24:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:27:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-863044-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29de5816079a4aa6bb73571d88da2d1b
	  System UUID:                29de5816-079a-4aa6-bb73-571d88da2d1b
	  Boot ID:                    0cdcf6dc-9f15-484d-b8ad-776471728809
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7r4h2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-72j9n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-863044-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug15 00:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050133] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.709914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.846087] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.586519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 00:21] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061023] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060159] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.174439] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118153] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.259429] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.778855] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.212652] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.060600] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.151808] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.077604] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.187372] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.703882] kauditd_printk_skb: 23 callbacks suppressed
	[Aug15 00:22] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6] <==
	{"level":"warn","ts":"2024-08-15T00:27:34.127806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.129694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.131116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.180803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.215553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.280896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.340230Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.347671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.351488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.360862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.371879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.378820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.381925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.382378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.385806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.392873Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.398755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.404938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.408134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.410990Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.435351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.439259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.444497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.449901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:27:34.481207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:27:34 up 6 min,  0 users,  load average: 0.03, 0.18, 0.10
	Linux ha-863044 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d] <==
	I0815 00:26:58.920186       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:27:08.923152       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:27:08.923250       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:27:08.923392       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:27:08.923414       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:27:08.923477       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:27:08.923495       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:27:08.923549       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:27:08.923567       1 main.go:299] handling current node
	I0815 00:27:18.928841       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:27:18.928895       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:27:18.929114       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:27:18.929134       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:27:18.929193       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:27:18.929211       1 main.go:299] handling current node
	I0815 00:27:18.929225       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:27:18.929229       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:27:28.919584       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:27:28.919622       1 main.go:299] handling current node
	I0815 00:27:28.919640       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:27:28.919645       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:27:28.919787       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:27:28.919806       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:27:28.919864       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:27:28.919870       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70] <==
	W0815 00:21:18.280898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6]
	I0815 00:21:18.281778       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:21:18.296853       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:21:18.615915       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:21:19.756501       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:21:19.773059       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 00:21:19.953993       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:21:23.865374       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0815 00:21:24.272117       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0815 00:23:58.977616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40932: use of closed network connection
	E0815 00:23:59.158964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40956: use of closed network connection
	E0815 00:23:59.332013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40974: use of closed network connection
	E0815 00:23:59.509349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40996: use of closed network connection
	E0815 00:23:59.691982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41014: use of closed network connection
	E0815 00:23:59.884601       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41036: use of closed network connection
	E0815 00:24:00.048860       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41058: use of closed network connection
	E0815 00:24:00.219559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41074: use of closed network connection
	E0815 00:24:00.393751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41078: use of closed network connection
	E0815 00:24:00.676450       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41108: use of closed network connection
	E0815 00:24:00.835680       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41134: use of closed network connection
	E0815 00:24:01.016971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41148: use of closed network connection
	E0815 00:24:01.193382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41160: use of closed network connection
	E0815 00:24:01.359759       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41186: use of closed network connection
	E0815 00:24:01.527956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41202: use of closed network connection
	W0815 00:25:28.294687       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.30 192.168.39.6]
	
	
	==> kube-controller-manager [9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506] <==
	E0815 00:24:34.327823       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vzkxz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vzkxz\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0815 00:24:34.728988       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-863044-m04\" does not exist"
	I0815 00:24:34.754572       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-863044-m04" podCIDRs=["10.244.3.0/24"]
	I0815 00:24:34.754634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.754669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.777657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.798636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:35.835650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:38.423648       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-863044-m04"
	I0815 00:24:38.482337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:39.409299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:39.442247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:45.132358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.253603       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:24:54.254109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.268789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.424933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:25:05.724012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:25:49.462565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:49.462766       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:25:49.486494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:49.548685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.415198ms"
	I0815 00:25:49.549390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.623µs"
	I0815 00:25:53.458494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:54.720986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	
	
	==> kube-proxy [5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:21:24.752099       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:21:24.765176       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0815 00:21:24.765269       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:21:24.839381       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:21:24.839433       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:21:24.839463       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:21:24.843188       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:21:24.843505       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:21:24.843526       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:21:24.844946       1 config.go:197] "Starting service config controller"
	I0815 00:21:24.844961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:21:24.844979       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:21:24.844992       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:21:24.845530       1 config.go:326] "Starting node config controller"
	I0815 00:21:24.845537       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:21:24.945136       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:21:24.945243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:21:24.946555       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c] <==
	W0815 00:21:17.683275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:21:17.683323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.699374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:21:17.699429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.705353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:21:17.705448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.757345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:21:17.757394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.813621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:21:17.813720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.870456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:21:17.870590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:21:19.967565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 00:23:26.029190       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lcjxq\": pod kindnet-lcjxq is already assigned to node \"ha-863044-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lcjxq" node="ha-863044-m03"
	E0815 00:23:26.029523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a31f4d-5cbe-4ca9-b0fb-d0ce15a0d3b5(kube-system/kindnet-lcjxq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lcjxq"
	E0815 00:23:26.029697       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lcjxq\": pod kindnet-lcjxq is already assigned to node \"ha-863044-m03\"" pod="kube-system/kindnet-lcjxq"
	I0815 00:23:26.029815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lcjxq" node="ha-863044-m03"
	E0815 00:24:34.806628       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hhvjh\": pod kube-proxy-hhvjh is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.808667       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fa2048e-40a6-4d67-9a16-e6d68caecb6b(kube-system/kube-proxy-hhvjh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hhvjh"
	E0815 00:24:34.809740       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hhvjh\": pod kube-proxy-hhvjh is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-hhvjh"
	I0815 00:24:34.809950       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.844902       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:24:34.845683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ac2ee81-5268-49b4-80fc-2b9950b30cad(kube-system/kube-proxy-5ptdm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ptdm"
	E0815 00:24:34.845833       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-5ptdm"
	I0815 00:24:34.845899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	
	
	==> kubelet <==
	Aug 15 00:26:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:26:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:26:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:26:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:26:19 ha-863044 kubelet[1326]: E0815 00:26:19.994415    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681579994000376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:19 ha-863044 kubelet[1326]: E0815 00:26:19.994457    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681579994000376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:29 ha-863044 kubelet[1326]: E0815 00:26:29.995677    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681589995387756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:29 ha-863044 kubelet[1326]: E0815 00:26:29.995703    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681589995387756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:39 ha-863044 kubelet[1326]: E0815 00:26:39.997255    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681599996938930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:39 ha-863044 kubelet[1326]: E0815 00:26:39.997293    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681599996938930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:49 ha-863044 kubelet[1326]: E0815 00:26:49.998366    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681609998132218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:26:49 ha-863044 kubelet[1326]: E0815 00:26:49.998406    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681609998132218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:00 ha-863044 kubelet[1326]: E0815 00:27:00.003592    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681620002909016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:00 ha-863044 kubelet[1326]: E0815 00:27:00.003628    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681620002909016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:10 ha-863044 kubelet[1326]: E0815 00:27:10.004684    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681630004433052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:10 ha-863044 kubelet[1326]: E0815 00:27:10.004721    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681630004433052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:19 ha-863044 kubelet[1326]: E0815 00:27:19.906537    1326 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:27:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:27:20 ha-863044 kubelet[1326]: E0815 00:27:20.006242    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681640005738508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:20 ha-863044 kubelet[1326]: E0815 00:27:20.006265    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681640005738508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:30 ha-863044 kubelet[1326]: E0815 00:27:30.007732    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681650007139605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:30 ha-863044 kubelet[1326]: E0815 00:27:30.007797    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681650007139605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863044 -n ha-863044
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (3.192632908s)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:27:38.972823   35515 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:27:38.972922   35515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:38.972933   35515 out.go:304] Setting ErrFile to fd 2...
	I0815 00:27:38.972939   35515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:38.973124   35515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:27:38.973365   35515 out.go:298] Setting JSON to false
	I0815 00:27:38.973401   35515 mustload.go:65] Loading cluster: ha-863044
	I0815 00:27:38.973499   35515 notify.go:220] Checking for updates...
	I0815 00:27:38.974613   35515 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:27:38.974679   35515 status.go:255] checking status of ha-863044 ...
	I0815 00:27:38.975364   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:38.975405   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:38.990970   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0815 00:27:38.991394   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:38.991893   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:38.991935   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:38.992281   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:38.992483   35515 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:27:38.994075   35515 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:27:38.994090   35515 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:38.994381   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:38.994424   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:39.009342   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0815 00:27:39.009716   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:39.010145   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:39.010158   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:39.010477   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:39.010655   35515 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:27:39.013202   35515 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:39.013534   35515 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:39.013552   35515 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:39.013724   35515 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:39.013997   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:39.014030   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:39.028315   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0815 00:27:39.028709   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:39.029152   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:39.029170   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:39.029456   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:39.029639   35515 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:27:39.029814   35515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:39.029839   35515 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:27:39.032289   35515 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:39.032789   35515 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:39.032819   35515 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:39.032971   35515 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:27:39.033140   35515 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:27:39.033278   35515 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:27:39.033412   35515 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:27:39.115793   35515 ssh_runner.go:195] Run: systemctl --version
	I0815 00:27:39.121406   35515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:39.135391   35515 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:39.135425   35515 api_server.go:166] Checking apiserver status ...
	I0815 00:27:39.135536   35515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:39.149435   35515 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:27:39.166188   35515 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:39.166243   35515 ssh_runner.go:195] Run: ls
	I0815 00:27:39.171030   35515 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:39.175164   35515 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:39.175205   35515 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:27:39.175217   35515 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:39.175237   35515 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:27:39.175635   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:39.175682   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:39.190371   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0815 00:27:39.190779   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:39.191223   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:39.191240   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:39.191533   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:39.191711   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:27:39.193107   35515 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:27:39.193124   35515 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:39.193511   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:39.193553   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:39.208150   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I0815 00:27:39.208493   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:39.208905   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:39.208926   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:39.209180   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:39.209330   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:27:39.212127   35515 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:39.212534   35515 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:39.212570   35515 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:39.212681   35515 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:39.212957   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:39.213010   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:39.227151   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 00:27:39.227495   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:39.228033   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:39.228059   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:39.228381   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:39.228537   35515 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:27:39.228731   35515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:39.228751   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:27:39.231228   35515 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:39.231599   35515 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:39.231626   35515 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:39.231750   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:27:39.231916   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:27:39.232054   35515 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:27:39.232195   35515 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:27:41.792977   35515 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:41.793076   35515 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:27:41.793101   35515 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:41.793109   35515 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:27:41.793131   35515 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:41.793138   35515 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:27:41.793442   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:41.793483   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:41.808237   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0815 00:27:41.808647   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:41.809088   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:41.809108   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:41.809410   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:41.809587   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:27:41.811005   35515 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:27:41.811019   35515 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:41.811289   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:41.811326   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:41.825428   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0815 00:27:41.825919   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:41.826381   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:41.826412   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:41.826731   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:41.826899   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:27:41.829385   35515 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:41.829742   35515 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:41.829759   35515 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:41.829872   35515 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:41.830161   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:41.830200   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:41.845745   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0815 00:27:41.846117   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:41.846545   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:41.846565   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:41.846836   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:41.846995   35515 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:27:41.847174   35515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:41.847191   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:27:41.849953   35515 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:41.850291   35515 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:41.850329   35515 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:41.850439   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:27:41.850599   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:27:41.850730   35515 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:27:41.850853   35515 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:27:41.931618   35515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:41.945231   35515 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:41.945261   35515 api_server.go:166] Checking apiserver status ...
	I0815 00:27:41.945303   35515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:41.958453   35515 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:27:41.966917   35515 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:41.966975   35515 ssh_runner.go:195] Run: ls
	I0815 00:27:41.970971   35515 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:41.975019   35515 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:41.975042   35515 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:27:41.975050   35515 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:41.975063   35515 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:27:41.975348   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:41.975379   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:41.991517   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0815 00:27:41.991909   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:41.992339   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:41.992356   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:41.992698   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:41.992878   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:27:41.994682   35515 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:27:41.994698   35515 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:41.994976   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:41.995012   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:42.010413   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I0815 00:27:42.010770   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:42.011248   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:42.011271   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:42.011593   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:42.011784   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:27:42.014147   35515 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:42.014527   35515 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:42.014552   35515 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:42.014704   35515 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:42.015027   35515 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:42.015076   35515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:42.029471   35515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I0815 00:27:42.029880   35515 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:42.030315   35515 main.go:141] libmachine: Using API Version  1
	I0815 00:27:42.030334   35515 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:42.030602   35515 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:42.030759   35515 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:27:42.030926   35515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:42.030947   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:27:42.033304   35515 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:42.033714   35515 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:42.033744   35515 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:42.033892   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:27:42.034046   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:27:42.034164   35515 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:27:42.034273   35515 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:27:42.112061   35515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:42.125251   35515 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
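The per-node probe that fails above for ha-863044-m02 (SSH dial, `df -h /var`, kubelet unit check) can be reproduced outside the test harness. The following is a minimal illustrative sketch, not minikube's own implementation; the node address 192.168.39.170:22 and the machine key path are the values printed in the log above, and the file name is invented.

// probe_m02.go — illustrative sketch only (not minikube source).
// Repeats the checks logged above for ha-863044-m02: dial SSH, read /var
// usage, and ask systemd whether the kubelet unit is active.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	keyPath := "/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa"
	addr := "192.168.39.170:22"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway probe
		Timeout:         10 * time.Second,
	}

	// This is the dial that fails in the run above with
	// "dial tcp 192.168.39.170:22: connect: no route to host",
	// which is why the node is reported as Host:Error / Kubelet:Nonexistent.
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		fmt.Println("host: Error ->", err)
		return
	}
	defer client.Close()

	// Same commands the status check runs on healthy nodes in the log.
	for _, cmd := range []string{
		`sh -c "df -h /var | awk 'NR==2{print $5}'"`,
		"sudo systemctl is-active --quiet service kubelet",
	} {
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		fmt.Printf("%s -> %q (err=%v)\n", cmd, out, err)
	}
}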
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (4.789115452s)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:27:43.511733   35617 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:27:43.512078   35617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:43.512100   35617 out.go:304] Setting ErrFile to fd 2...
	I0815 00:27:43.512108   35617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:43.512580   35617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:27:43.512932   35617 out.go:298] Setting JSON to false
	I0815 00:27:43.512955   35617 mustload.go:65] Loading cluster: ha-863044
	I0815 00:27:43.513278   35617 notify.go:220] Checking for updates...
	I0815 00:27:43.513790   35617 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:27:43.513806   35617 status.go:255] checking status of ha-863044 ...
	I0815 00:27:43.514165   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.514213   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.533972   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0815 00:27:43.534426   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.534911   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.534931   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.535268   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.535413   35617 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:27:43.537063   35617 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:27:43.537076   35617 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:43.537319   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.537347   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.551562   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39329
	I0815 00:27:43.551909   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.552295   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.552313   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.552610   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.552800   35617 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:27:43.555196   35617 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:43.555552   35617 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:43.555592   35617 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:43.555718   35617 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:43.556090   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.556124   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.569969   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I0815 00:27:43.570350   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.570787   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.570814   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.571078   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.571246   35617 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:27:43.571398   35617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:43.571417   35617 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:27:43.574003   35617 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:43.574345   35617 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:43.574374   35617 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:43.574536   35617 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:27:43.574678   35617 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:27:43.574816   35617 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:27:43.574936   35617 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:27:43.660212   35617 ssh_runner.go:195] Run: systemctl --version
	I0815 00:27:43.665991   35617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:43.679956   35617 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:43.679988   35617 api_server.go:166] Checking apiserver status ...
	I0815 00:27:43.680026   35617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:43.693703   35617 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:27:43.702612   35617 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:43.702675   35617 ssh_runner.go:195] Run: ls
	I0815 00:27:43.706501   35617 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:43.710698   35617 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:43.710722   35617 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:27:43.710731   35617 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:43.710765   35617 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:27:43.711051   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.711082   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.726088   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43573
	I0815 00:27:43.726490   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.727012   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.727036   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.727342   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.727557   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:27:43.729049   35617 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:27:43.729062   35617 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:43.729350   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.729392   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.744345   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I0815 00:27:43.744709   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.745242   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.745260   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.745558   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.745742   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:27:43.748595   35617 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:43.749082   35617 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:43.749109   35617 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:43.749217   35617 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:43.749635   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:43.749678   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:43.765002   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0815 00:27:43.765491   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:43.766025   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:43.766063   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:43.766437   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:43.766691   35617 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:27:43.766917   35617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:43.766941   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:27:43.769683   35617 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:43.770091   35617 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:43.770117   35617 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:43.770289   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:27:43.770454   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:27:43.770640   35617 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:27:43.770872   35617 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:27:44.864896   35617 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:44.864954   35617 retry.go:31] will retry after 221.920154ms: dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:47.936980   35617 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:47.937092   35617 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:27:47.937113   35617 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:47.937125   35617 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:27:47.937156   35617 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:47.937168   35617 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:27:47.937462   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:47.937509   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:47.951956   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0815 00:27:47.952323   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:47.952815   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:47.952836   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:47.953198   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:47.953389   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:27:47.954898   35617 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:27:47.954911   35617 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:47.955216   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:47.955245   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:47.969153   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0815 00:27:47.969518   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:47.969897   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:47.969913   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:47.970165   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:47.970342   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:27:47.972944   35617 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:47.973362   35617 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:47.973383   35617 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:47.973521   35617 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:47.973843   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:47.973879   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:47.987441   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0815 00:27:47.987773   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:47.988185   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:47.988205   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:47.988492   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:47.988698   35617 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:27:47.988857   35617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:47.988874   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:27:47.991245   35617 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:47.991594   35617 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:47.991624   35617 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:47.991747   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:27:47.991888   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:27:47.992014   35617 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:27:47.992115   35617 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:27:48.068082   35617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:48.082656   35617 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:48.082680   35617 api_server.go:166] Checking apiserver status ...
	I0815 00:27:48.082711   35617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:48.096591   35617 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:27:48.104894   35617 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:48.104941   35617 ssh_runner.go:195] Run: ls
	I0815 00:27:48.109196   35617 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:48.113383   35617 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:48.113402   35617 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:27:48.113410   35617 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:48.113423   35617 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:27:48.113703   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:48.113744   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:48.128813   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0815 00:27:48.129209   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:48.129653   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:48.129673   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:48.129989   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:48.130170   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:27:48.131496   35617 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:27:48.131511   35617 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:48.131800   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:48.131838   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:48.145851   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0815 00:27:48.146208   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:48.146679   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:48.146704   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:48.147019   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:48.147202   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:27:48.150180   35617 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:48.150634   35617 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:48.150666   35617 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:48.150822   35617 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:48.151147   35617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:48.151189   35617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:48.167143   35617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I0815 00:27:48.167499   35617 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:48.167967   35617 main.go:141] libmachine: Using API Version  1
	I0815 00:27:48.167990   35617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:48.168270   35617 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:48.168448   35617 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:27:48.168600   35617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:48.168622   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:27:48.171426   35617 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:48.171873   35617 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:48.171892   35617 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:48.172062   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:27:48.172226   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:27:48.172365   35617 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:27:48.172499   35617 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:27:48.247473   35617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:48.260490   35617 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
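By contrast, the control-plane health check against the cluster VIP succeeds in every attempt above (https://192.168.39.254:8443/healthz returns 200 "ok"). Below is a minimal sketch of that probe, again illustrative rather than minikube's own code, with TLS verification skipped purely to keep the example self-contained.

// healthz_probe.go — illustrative sketch only (not minikube source).
// Queries the same endpoint the log reports as healthy:
// https://192.168.39.254:8443/healthz -> "200: ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the sketch self-contained; a real
			// probe would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Stopped ->", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expected here: "200: ok"
}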
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (4.943025766s)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:27:49.829913   35724 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:27:49.830157   35724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:49.830165   35724 out.go:304] Setting ErrFile to fd 2...
	I0815 00:27:49.830170   35724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:49.830351   35724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:27:49.830502   35724 out.go:298] Setting JSON to false
	I0815 00:27:49.830524   35724 mustload.go:65] Loading cluster: ha-863044
	I0815 00:27:49.830633   35724 notify.go:220] Checking for updates...
	I0815 00:27:49.831019   35724 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:27:49.831038   35724 status.go:255] checking status of ha-863044 ...
	I0815 00:27:49.831584   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:49.831642   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:49.849580   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I0815 00:27:49.850051   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:49.850568   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:49.850595   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:49.850932   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:49.851135   35724 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:27:49.852641   35724 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:27:49.852674   35724 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:49.852934   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:49.852970   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:49.868049   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0815 00:27:49.868512   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:49.869244   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:49.869278   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:49.869630   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:49.869832   35724 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:27:49.872873   35724 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:49.873280   35724 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:49.873311   35724 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:49.873417   35724 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:49.873714   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:49.873744   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:49.887905   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0815 00:27:49.888287   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:49.888759   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:49.888781   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:49.889026   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:49.889189   35724 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:27:49.889394   35724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:49.889426   35724 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:27:49.891738   35724 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:49.892163   35724 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:49.892201   35724 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:49.892305   35724 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:27:49.892458   35724 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:27:49.892593   35724 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:27:49.892738   35724 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:27:49.979368   35724 ssh_runner.go:195] Run: systemctl --version
	I0815 00:27:49.985547   35724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:49.999371   35724 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:49.999406   35724 api_server.go:166] Checking apiserver status ...
	I0815 00:27:49.999448   35724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:50.013264   35724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:27:50.021828   35724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:50.021880   35724 ssh_runner.go:195] Run: ls
	I0815 00:27:50.025756   35724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:50.031432   35724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:50.031450   35724 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:27:50.031470   35724 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:50.031492   35724 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:27:50.031873   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:50.031913   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:50.046878   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I0815 00:27:50.047231   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:50.047662   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:50.047689   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:50.047972   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:50.048165   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:27:50.049807   35724 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:27:50.049822   35724 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:50.050096   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:50.050144   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:50.064527   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0815 00:27:50.064922   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:50.065330   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:50.065349   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:50.065680   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:50.065856   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:27:50.068384   35724 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:50.068798   35724 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:50.068824   35724 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:50.068928   35724 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:50.069355   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:50.069402   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:50.085499   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0815 00:27:50.085913   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:50.086362   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:50.086383   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:50.086666   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:50.086796   35724 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:27:50.086918   35724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:50.086941   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:27:50.089585   35724 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:50.090046   35724 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:50.090068   35724 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:50.090211   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:27:50.090391   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:27:50.090552   35724 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:27:50.090698   35724 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:27:51.008919   35724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:51.008959   35724 retry.go:31] will retry after 315.995748ms: dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:54.400895   35724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:27:54.400989   35724 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:27:54.401011   35724 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:54.401024   35724 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:27:54.401042   35724 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:27:54.401050   35724 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:27:54.401358   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.401400   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.416014   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41759
	I0815 00:27:54.416379   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.416839   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.416861   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.417145   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.417321   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:27:54.418875   35724 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:27:54.418887   35724 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:54.419170   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.419202   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.433822   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
	I0815 00:27:54.434215   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.434660   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.434682   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.434991   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.435175   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:27:54.438151   35724 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:54.438533   35724 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:54.438554   35724 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:54.438714   35724 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:27:54.439016   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.439057   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.453738   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0815 00:27:54.454219   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.454658   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.454680   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.454987   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.455150   35724 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:27:54.455366   35724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:54.455389   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:27:54.457882   35724 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:54.458281   35724 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:27:54.458307   35724 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:27:54.458409   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:27:54.458551   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:27:54.458674   35724 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:27:54.458833   35724 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:27:54.535523   35724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:54.549196   35724 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:54.549222   35724 api_server.go:166] Checking apiserver status ...
	I0815 00:27:54.549257   35724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:54.561950   35724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:27:54.570867   35724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:54.570927   35724 ssh_runner.go:195] Run: ls
	I0815 00:27:54.575123   35724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:54.579440   35724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:54.579468   35724 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:27:54.579478   35724 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:54.579497   35724 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:27:54.579857   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.579899   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.594972   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
	I0815 00:27:54.595419   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.595922   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.595943   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.596262   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.596457   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:27:54.598090   35724 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:27:54.598104   35724 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:54.598452   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.598516   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.613777   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0815 00:27:54.614184   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.614654   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.614669   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.614926   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.615089   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:27:54.617717   35724 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:54.618091   35724 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:54.618153   35724 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:54.618240   35724 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:27:54.618632   35724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:54.618682   35724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:54.633384   35724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0815 00:27:54.633748   35724 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:54.634176   35724 main.go:141] libmachine: Using API Version  1
	I0815 00:27:54.634197   35724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:54.634476   35724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:54.634670   35724 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:27:54.634850   35724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:54.634866   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:27:54.637694   35724 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:54.638023   35724 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:27:54.638055   35724 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:27:54.638215   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:27:54.638356   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:27:54.638490   35724 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:27:54.638611   35724 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:27:54.715896   35724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:54.730142   35724 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (3.70200061s)

-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 00:27:57.501500   35840 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:27:57.501606   35840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:57.501622   35840 out.go:304] Setting ErrFile to fd 2...
	I0815 00:27:57.501628   35840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:27:57.501786   35840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:27:57.501951   35840 out.go:298] Setting JSON to false
	I0815 00:27:57.501973   35840 mustload.go:65] Loading cluster: ha-863044
	I0815 00:27:57.502072   35840 notify.go:220] Checking for updates...
	I0815 00:27:57.502320   35840 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:27:57.502338   35840 status.go:255] checking status of ha-863044 ...
	I0815 00:27:57.502720   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.502784   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.522525   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0815 00:27:57.522987   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.523519   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.523539   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.523932   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.524116   35840 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:27:57.525718   35840 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:27:57.525735   35840 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:57.526016   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.526052   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.541360   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0815 00:27:57.541798   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.542290   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.542310   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.542634   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.542837   35840 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:27:57.545472   35840 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:57.545846   35840 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:57.545872   35840 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:57.545974   35840 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:27:57.546249   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.546282   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.561107   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I0815 00:27:57.561494   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.561923   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.561944   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.562234   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.562409   35840 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:27:57.562557   35840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:57.562591   35840 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:27:57.565290   35840 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:57.565646   35840 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:27:57.565669   35840 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:27:57.565790   35840 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:27:57.565950   35840 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:27:57.566094   35840 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:27:57.566208   35840 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:27:57.651783   35840 ssh_runner.go:195] Run: systemctl --version
	I0815 00:27:57.657502   35840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:27:57.670819   35840 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:27:57.670853   35840 api_server.go:166] Checking apiserver status ...
	I0815 00:27:57.670898   35840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:27:57.683578   35840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:27:57.691846   35840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:27:57.691898   35840 ssh_runner.go:195] Run: ls
	I0815 00:27:57.695876   35840 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:27:57.700035   35840 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:27:57.700060   35840 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:27:57.700072   35840 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:27:57.700092   35840 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:27:57.700468   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.700512   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.715147   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0815 00:27:57.715519   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.715963   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.715984   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.716260   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.716462   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:27:57.718086   35840 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:27:57.718105   35840 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:57.718402   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.718435   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.732885   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0815 00:27:57.733326   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.733759   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.733780   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.734085   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.734253   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:27:57.736812   35840 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:57.737231   35840 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:57.737258   35840 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:57.737385   35840 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:27:57.737698   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:27:57.737739   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:27:57.753084   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0815 00:27:57.753545   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:27:57.754045   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:27:57.754069   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:27:57.754445   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:27:57.754638   35840 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:27:57.754831   35840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:27:57.754854   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:27:57.757621   35840 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:57.758018   35840 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:27:57.758043   35840 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:27:57.758206   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:27:57.758370   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:27:57.758510   35840 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:27:57.758646   35840 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:28:00.832871   35840 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:28:00.832978   35840 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:28:00.833000   35840 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:00.833012   35840 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:28:00.833035   35840 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:00.833047   35840 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:28:00.833391   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:00.833441   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:00.847681   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0815 00:28:00.848119   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:00.848604   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:00.848644   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:00.848927   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:00.849111   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:28:00.850685   35840 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:28:00.850698   35840 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:00.850974   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:00.851009   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:00.865036   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0815 00:28:00.865386   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:00.865795   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:00.865812   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:00.866088   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:00.866246   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:28:00.868898   35840 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:00.869261   35840 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:00.869284   35840 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:00.869374   35840 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:00.869681   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:00.869713   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:00.884371   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0815 00:28:00.884782   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:00.885190   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:00.885208   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:00.885482   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:00.885643   35840 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:28:00.885804   35840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:00.885822   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:28:00.888146   35840 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:00.888522   35840 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:00.888546   35840 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:00.888671   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:28:00.888850   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:28:00.888997   35840 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:28:00.889124   35840 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:28:00.964409   35840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:00.981075   35840 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:00.981101   35840 api_server.go:166] Checking apiserver status ...
	I0815 00:28:00.981136   35840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:00.994950   35840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:28:01.004065   35840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:01.004113   35840 ssh_runner.go:195] Run: ls
	I0815 00:28:01.008550   35840 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:01.014344   35840 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:01.014365   35840 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:28:01.014376   35840 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:01.014395   35840 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:28:01.014806   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:01.014855   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:01.029594   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0815 00:28:01.029967   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:01.030414   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:01.030435   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:01.030737   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:01.030936   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:01.032377   35840 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:28:01.032389   35840 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:01.032724   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:01.032757   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:01.048014   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I0815 00:28:01.048412   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:01.048849   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:01.048866   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:01.049150   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:01.049341   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:28:01.051780   35840 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:01.052173   35840 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:01.052208   35840 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:01.052333   35840 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:01.052669   35840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:01.052703   35840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:01.066599   35840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0815 00:28:01.066931   35840 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:01.067408   35840 main.go:141] libmachine: Using API Version  1
	I0815 00:28:01.067455   35840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:01.067714   35840 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:01.067890   35840 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:28:01.068032   35840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:01.068052   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:28:01.070322   35840 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:01.070684   35840 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:01.070712   35840 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:01.070847   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:28:01.070993   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:28:01.071161   35840 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:28:01.071293   35840 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:28:01.147200   35840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:01.162620   35840 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (4.250751488s)

-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 00:28:03.419442   35940 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:28:03.419548   35940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:03.419559   35940 out.go:304] Setting ErrFile to fd 2...
	I0815 00:28:03.419565   35940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:03.419833   35940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:28:03.420040   35940 out.go:298] Setting JSON to false
	I0815 00:28:03.420065   35940 mustload.go:65] Loading cluster: ha-863044
	I0815 00:28:03.420151   35940 notify.go:220] Checking for updates...
	I0815 00:28:03.420515   35940 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:28:03.420531   35940 status.go:255] checking status of ha-863044 ...
	I0815 00:28:03.421010   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.421058   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.440360   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0815 00:28:03.440794   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.441380   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.441407   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.441734   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.441936   35940 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:28:03.443605   35940 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:28:03.443622   35940 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:03.443947   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.443984   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.458691   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0815 00:28:03.459134   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.459582   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.459603   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.459905   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.460077   35940 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:28:03.462673   35940 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:03.463106   35940 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:03.463135   35940 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:03.463280   35940 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:03.463559   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.463591   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.477964   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
	I0815 00:28:03.478371   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.478838   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.478864   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.479133   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.479267   35940 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:28:03.479458   35940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:03.479489   35940 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:28:03.482054   35940 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:03.482434   35940 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:03.482469   35940 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:03.482611   35940 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:28:03.482774   35940 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:28:03.482909   35940 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:28:03.483017   35940 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:28:03.568129   35940 ssh_runner.go:195] Run: systemctl --version
	I0815 00:28:03.574225   35940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:03.589734   35940 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:03.589761   35940 api_server.go:166] Checking apiserver status ...
	I0815 00:28:03.589804   35940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:03.603632   35940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:28:03.613412   35940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:03.613467   35940 ssh_runner.go:195] Run: ls
	I0815 00:28:03.617409   35940 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:03.621773   35940 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:03.621791   35940 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:28:03.621799   35940 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:03.621818   35940 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:28:03.622081   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.622109   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.636673   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I0815 00:28:03.637089   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.637589   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.637608   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.637924   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.638104   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:28:03.639665   35940 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:28:03.639680   35940 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:28:03.639979   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.640009   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.654721   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0815 00:28:03.655078   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.655559   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.655582   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.655897   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.656104   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:28:03.658477   35940 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:03.658840   35940 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:28:03.658863   35940 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:03.658976   35940 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:28:03.659248   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:03.659285   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:03.673580   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0815 00:28:03.674047   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:03.674541   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:03.674561   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:03.674895   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:03.675054   35940 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:28:03.675263   35940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:03.675286   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:28:03.678207   35940 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:03.678622   35940 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:28:03.678653   35940 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:03.678778   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:28:03.678935   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:28:03.679077   35940 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:28:03.679239   35940 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:28:03.904852   35940 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:03.904909   35940 retry.go:31] will retry after 335.078294ms: dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:28:07.296938   35940 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:28:07.297021   35940 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:28:07.297034   35940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:07.297040   35940 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:28:07.297060   35940 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:07.297070   35940 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:28:07.297359   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.297404   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.311972   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0815 00:28:07.312356   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.312864   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.312897   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.313191   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.313380   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:28:07.314914   35940 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:28:07.314929   35940 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:07.315218   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.315250   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.329967   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0815 00:28:07.330482   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.330988   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.331008   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.331268   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.331435   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:28:07.334185   35940 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:07.334585   35940 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:07.334629   35940 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:07.334769   35940 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:07.335089   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.335150   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.349626   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0815 00:28:07.349999   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.350383   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.350404   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.350704   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.350899   35940 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:28:07.351074   35940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:07.351096   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:28:07.354001   35940 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:07.354442   35940 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:07.354470   35940 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:07.354782   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:28:07.354971   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:28:07.355132   35940 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:28:07.355277   35940 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:28:07.431703   35940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:07.446154   35940 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:07.446181   35940 api_server.go:166] Checking apiserver status ...
	I0815 00:28:07.446235   35940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:07.460636   35940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:28:07.470006   35940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:07.470067   35940 ssh_runner.go:195] Run: ls
	I0815 00:28:07.473878   35940 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:07.479503   35940 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:07.479525   35940 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:28:07.479536   35940 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:07.479559   35940 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:28:07.479889   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.479922   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.494652   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0815 00:28:07.495059   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.495503   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.495523   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.495805   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.495974   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:07.497487   35940 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:28:07.497505   35940 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:07.497840   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.497870   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.512850   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0815 00:28:07.513302   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.513787   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.513810   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.514128   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.514289   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:28:07.516949   35940 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:07.517342   35940 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:07.517377   35940 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:07.517523   35940 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:07.517804   35940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:07.517836   35940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:07.531981   35940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I0815 00:28:07.532362   35940 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:07.532887   35940 main.go:141] libmachine: Using API Version  1
	I0815 00:28:07.532906   35940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:07.533306   35940 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:07.533511   35940 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:28:07.533714   35940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:07.533735   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:28:07.536362   35940 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:07.536743   35940 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:07.536779   35940 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:07.536876   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:28:07.537036   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:28:07.537179   35940 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:28:07.537300   35940 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:28:07.615229   35940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:07.629282   35940 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
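Each failed status run above shows the same pattern: for every node, minikube opens an SSH session to run df -h /var (and, on control-plane nodes, checks the apiserver via pgrep and https://192.168.39.254:8443/healthz), but the dial to ha-863044-m02 at 192.168.39.170:22 fails with "connect: no route to host", so that node is reported as host: Error / kubelet: Nonexistent while the other nodes stay Running. Below is a minimal reachability sketch in Go, using only the standard library, that reproduces the failing dial seen in the log; probeSSH is a hypothetical helper for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH is a hypothetical illustration helper (not minikube code): it
// performs the same kind of TCP dial to a node's SSH endpoint that fails in
// the log above with "connect: no route to host".
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. dial tcp 192.168.39.170:22: connect: no route to host
	}
	return conn.Close()
}

func main() {
	// Address taken from the log: the SSH endpoint of ha-863044-m02.
	if err := probeSSH("192.168.39.170:22", 5*time.Second); err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	fmt.Println("node reachable")
}

Against the other nodes (192.168.39.6, 192.168.39.30, 192.168.39.247) the SSH sessions in the runs above succeed, which is consistent with the Running status they report.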
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (3.701822654s)

-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 00:28:13.680103   36057 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:28:13.680361   36057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:13.680371   36057 out.go:304] Setting ErrFile to fd 2...
	I0815 00:28:13.680376   36057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:13.680551   36057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:28:13.680752   36057 out.go:298] Setting JSON to false
	I0815 00:28:13.680775   36057 mustload.go:65] Loading cluster: ha-863044
	I0815 00:28:13.680875   36057 notify.go:220] Checking for updates...
	I0815 00:28:13.681279   36057 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:28:13.681299   36057 status.go:255] checking status of ha-863044 ...
	I0815 00:28:13.681761   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.681824   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.700775   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0815 00:28:13.701130   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.701649   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.701677   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.701995   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.702195   36057 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:28:13.703764   36057 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:28:13.703777   36057 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:13.704052   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.704085   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.718290   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0815 00:28:13.718645   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.719055   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.719077   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.719392   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.719549   36057 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:28:13.722051   36057 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:13.722467   36057 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:13.722490   36057 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:13.722636   36057 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:13.722931   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.722982   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.736874   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0815 00:28:13.737200   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.737600   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.737634   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.737929   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.738127   36057 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:28:13.738316   36057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:13.738344   36057 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:28:13.741157   36057 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:13.741590   36057 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:13.741609   36057 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:13.741803   36057 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:28:13.741957   36057 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:28:13.742092   36057 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:28:13.742224   36057 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:28:13.828176   36057 ssh_runner.go:195] Run: systemctl --version
	I0815 00:28:13.833755   36057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:13.848714   36057 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:13.848740   36057 api_server.go:166] Checking apiserver status ...
	I0815 00:28:13.848771   36057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:13.862924   36057 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:28:13.872415   36057 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:13.872469   36057 ssh_runner.go:195] Run: ls
	I0815 00:28:13.876797   36057 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:13.882823   36057 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:13.882853   36057 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:28:13.882866   36057 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:13.882887   36057 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:28:13.883187   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.883227   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.897917   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0815 00:28:13.898310   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.898765   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.898783   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.899120   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.899308   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:28:13.900946   36057 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:28:13.900964   36057 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:28:13.901325   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.901363   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.917547   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33819
	I0815 00:28:13.917959   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.918460   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.918479   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.918770   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.918980   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:28:13.921962   36057 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:13.922533   36057 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:28:13.922567   36057 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:13.922761   36057 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:28:13.923041   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:13.923080   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:13.937900   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0815 00:28:13.938345   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:13.938821   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:13.938842   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:13.939175   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:13.939381   36057 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:28:13.939584   36057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:13.939609   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:28:13.942311   36057 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:13.942747   36057 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:28:13.942774   36057 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:28:13.942915   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:28:13.943083   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:28:13.943200   36057 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:28:13.943381   36057 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	W0815 00:28:16.992920   36057 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.170:22: connect: no route to host
	W0815 00:28:16.993017   36057 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0815 00:28:16.993033   36057 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:16.993040   36057 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:28:16.993056   36057 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	I0815 00:28:16.993063   36057 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:28:16.993365   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:16.993411   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.008093   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0815 00:28:17.008522   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.009037   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.009060   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.009381   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.009603   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:28:17.011305   36057 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:28:17.011320   36057 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:17.011634   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:17.011672   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.026210   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35347
	I0815 00:28:17.026656   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.027138   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.027165   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.027473   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.027689   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:28:17.030779   36057 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:17.031174   36057 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:17.031199   36057 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:17.031366   36057 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:17.031699   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:17.031736   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.046481   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0815 00:28:17.046852   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.047337   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.047358   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.047637   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.047833   36057 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:28:17.048084   36057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:17.048108   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:28:17.050716   36057 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:17.051173   36057 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:17.051197   36057 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:17.051357   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:28:17.051528   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:28:17.051649   36057 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:28:17.051772   36057 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:28:17.127644   36057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:17.148852   36057 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:17.148875   36057 api_server.go:166] Checking apiserver status ...
	I0815 00:28:17.148913   36057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:17.169450   36057 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:28:17.182097   36057 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:17.182174   36057 ssh_runner.go:195] Run: ls
	I0815 00:28:17.186550   36057 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:17.191592   36057 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:17.191615   36057 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:28:17.191631   36057 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:17.191645   36057 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:28:17.191925   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:17.191961   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.207304   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0815 00:28:17.207715   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.208186   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.208202   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.208498   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.208718   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:17.210236   36057 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:28:17.210251   36057 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:17.210549   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:17.210588   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.224672   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0815 00:28:17.225057   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.225508   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.225527   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.225811   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.225974   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:28:17.228595   36057 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:17.228994   36057 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:17.229039   36057 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:17.229145   36057 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:17.229451   36057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:17.229506   36057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:17.243434   36057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0815 00:28:17.243804   36057 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:17.244284   36057 main.go:141] libmachine: Using API Version  1
	I0815 00:28:17.244375   36057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:17.244711   36057 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:17.244881   36057 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:28:17.245041   36057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:17.245071   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:28:17.247389   36057 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:17.247848   36057 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:17.247875   36057 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:17.248001   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:28:17.248155   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:28:17.248282   36057 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:28:17.248429   36057 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:28:17.327553   36057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:17.341034   36057 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
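The stderr above shows why this run of "out/minikube-linux-amd64 -p ha-863044 status" exits 3: the SSH dial to ha-863044-m02 at 192.168.39.170:22 fails with "connect: no route to host", so that node is reported Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent while the other nodes check out healthy. Below is a minimal sketch of that kind of TCP reachability probe, using only the Go standard library; it is illustrative only and not minikube's actual sshutil implementation, and the address is taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe dials addr over TCP and reports whether the SSH port answered
	// within the timeout, mirroring the failure captured in the log above
	// ("dial tcp 192.168.39.170:22: connect: no route to host").
	func probe(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err // e.g. "connect: no route to host" while the guest is down
		}
		return conn.Close()
	}

	func main() {
		if err := probe("192.168.39.170:22", 5*time.Second); err != nil {
			fmt.Println("host unreachable:", err) // status would report Host:Error
		} else {
			fmt.Println("ssh port reachable")
		}
	}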
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 7 (604.232998ms)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-863044-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:28:26.932201   36193 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:28:26.932310   36193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:26.932319   36193 out.go:304] Setting ErrFile to fd 2...
	I0815 00:28:26.932323   36193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:26.932525   36193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:28:26.932717   36193 out.go:298] Setting JSON to false
	I0815 00:28:26.932739   36193 mustload.go:65] Loading cluster: ha-863044
	I0815 00:28:26.932845   36193 notify.go:220] Checking for updates...
	I0815 00:28:26.933223   36193 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:28:26.933243   36193 status.go:255] checking status of ha-863044 ...
	I0815 00:28:26.933716   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:26.933769   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:26.951517   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I0815 00:28:26.952008   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:26.952612   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:26.952630   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:26.952943   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:26.953122   36193 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:28:26.954660   36193 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:28:26.954676   36193 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:26.955062   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:26.955103   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:26.969391   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0815 00:28:26.969790   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:26.970240   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:26.970261   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:26.970582   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:26.970773   36193 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:28:26.973532   36193 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:26.973893   36193 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:26.973911   36193 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:26.974060   36193 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:28:26.974333   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:26.974382   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:26.988863   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0815 00:28:26.989384   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:26.989965   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:26.989992   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:26.990383   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:26.990561   36193 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:28:26.990769   36193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:26.990808   36193 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:28:26.993544   36193 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:26.993951   36193 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:28:26.993976   36193 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:28:26.994095   36193 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:28:26.994235   36193 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:28:26.994370   36193 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:28:26.994460   36193 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:28:27.080562   36193 ssh_runner.go:195] Run: systemctl --version
	I0815 00:28:27.086235   36193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:27.101201   36193 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:27.101231   36193 api_server.go:166] Checking apiserver status ...
	I0815 00:28:27.101264   36193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:27.114275   36193 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup
	W0815 00:28:27.123225   36193 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1099/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:27.123270   36193 ssh_runner.go:195] Run: ls
	I0815 00:28:27.127014   36193 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:27.132729   36193 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:27.132754   36193 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:28:27.132767   36193 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:27.132795   36193 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:28:27.133173   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.133216   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.147682   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0815 00:28:27.148132   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.148593   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.148613   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.148950   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.149136   36193 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:28:27.150712   36193 status.go:330] ha-863044-m02 host status = "Stopped" (err=<nil>)
	I0815 00:28:27.150728   36193 status.go:343] host is not running, skipping remaining checks
	I0815 00:28:27.150736   36193 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:27.150756   36193 status.go:255] checking status of ha-863044-m03 ...
	I0815 00:28:27.151163   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.151208   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.169108   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41889
	I0815 00:28:27.169446   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.169875   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.169891   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.170147   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.170324   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:28:27.172003   36193 status.go:330] ha-863044-m03 host status = "Running" (err=<nil>)
	I0815 00:28:27.172036   36193 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:27.172339   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.172375   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.186437   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0815 00:28:27.186777   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.187196   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.187213   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.187521   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.187708   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:28:27.190381   36193 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:27.190763   36193 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:27.190800   36193 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:27.190868   36193 host.go:66] Checking if "ha-863044-m03" exists ...
	I0815 00:28:27.191161   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.191220   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.206296   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0815 00:28:27.206856   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.207449   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.207474   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.207816   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.208048   36193 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:28:27.208245   36193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:27.208264   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:28:27.211170   36193 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:27.211631   36193 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:27.211714   36193 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:27.211927   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:28:27.212086   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:28:27.212223   36193 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:28:27.212359   36193 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:28:27.295423   36193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:27.311030   36193 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:28:27.311059   36193 api_server.go:166] Checking apiserver status ...
	I0815 00:28:27.311110   36193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:28:27.324697   36193 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0815 00:28:27.334057   36193 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:28:27.334105   36193 ssh_runner.go:195] Run: ls
	I0815 00:28:27.338026   36193 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:28:27.343487   36193 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:28:27.343505   36193 status.go:422] ha-863044-m03 apiserver status = Running (err=<nil>)
	I0815 00:28:27.343514   36193 status.go:257] ha-863044-m03 status: &{Name:ha-863044-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:28:27.343530   36193 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:28:27.343865   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.343903   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.358340   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0815 00:28:27.358724   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.359152   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.359171   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.359436   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.359614   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:27.361097   36193 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:28:27.361112   36193 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:27.361530   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.361570   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.376013   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0815 00:28:27.376410   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.376961   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.376981   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.377263   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.377455   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:28:27.380233   36193 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:27.380695   36193 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:27.380724   36193 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:27.380830   36193 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:28:27.381121   36193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:27.381161   36193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:27.395419   36193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0815 00:28:27.395838   36193 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:27.396332   36193 main.go:141] libmachine: Using API Version  1
	I0815 00:28:27.396347   36193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:27.396666   36193 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:27.396811   36193 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:28:27.396992   36193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:28:27.397027   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:28:27.399828   36193 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:27.400261   36193 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:27.400314   36193 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:27.400727   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:28:27.400890   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:28:27.401051   36193 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:28:27.401401   36193 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:28:27.480414   36193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:28:27.494222   36193 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr" : exit status 7
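The two status invocations above fail differently: exit status 3 while ha-863044-m02 is still running in libvirt but unreachable over SSH (Host:Error), then exit status 7 once the host is reported Stopped. Below is a minimal sketch of running the same command from Go and inspecting its exit code with os/exec; it is illustrative only, the real assertion lives in ha_test.go, and the observed exit codes are simply those captured in this report.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same status command the test uses and capture its output.
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "ha-863044", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		exitCode := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			exitCode = exitErr.ExitCode()
		} else if err != nil {
			fmt.Println("failed to run:", err)
			return
		}
		// In the runs above this was 3 (node unreachable over SSH)
		// and then 7 (node host reported Stopped).
		fmt.Println("exit code:", exitCode)
	}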
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863044 -n ha-863044
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863044 logs -n 25: (1.314728522s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m03_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m04 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp testdata/cp-test.txt                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m04_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03:/home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m03 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863044 node stop m02 -v=7                                                     | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863044 node start m02 -v=7                                                    | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:20:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:20:37.881748   30723 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:20:37.881988   30723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:37.881995   30723 out.go:304] Setting ErrFile to fd 2...
	I0815 00:20:37.881999   30723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:37.882201   30723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:20:37.882746   30723 out.go:298] Setting JSON to false
	I0815 00:20:37.883560   30723 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3783,"bootTime":1723677455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:20:37.883615   30723 start.go:139] virtualization: kvm guest
	I0815 00:20:37.885864   30723 out.go:177] * [ha-863044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:20:37.887153   30723 notify.go:220] Checking for updates...
	I0815 00:20:37.887173   30723 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:20:37.888629   30723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:20:37.890054   30723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:20:37.891426   30723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:37.892691   30723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:20:37.894038   30723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:20:37.895541   30723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:20:37.930133   30723 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 00:20:37.931685   30723 start.go:297] selected driver: kvm2
	I0815 00:20:37.931696   30723 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:20:37.931714   30723 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:20:37.932433   30723 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:20:37.932500   30723 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:20:37.947617   30723 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:20:37.947667   30723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:20:37.947865   30723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:20:37.947924   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:20:37.947935   30723 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 00:20:37.947940   30723 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 00:20:37.947987   30723 start.go:340] cluster config:
	{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:20:37.948079   30723 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:20:37.949981   30723 out.go:177] * Starting "ha-863044" primary control-plane node in "ha-863044" cluster
	I0815 00:20:37.951405   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:20:37.951428   30723 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:20:37.951435   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:20:37.951509   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:20:37.951518   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:20:37.951836   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:20:37.951856   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json: {Name:mkc2ad5323f3c8995300a3bc69f9d801a70bd1b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:20:37.951994   30723 start.go:360] acquireMachinesLock for ha-863044: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:20:37.952020   30723 start.go:364] duration metric: took 14.311µs to acquireMachinesLock for "ha-863044"
	I0815 00:20:37.952035   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:20:37.952080   30723 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 00:20:37.953646   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:20:37.953774   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:37.953808   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:37.967545   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I0815 00:20:37.967960   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:37.968468   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:20:37.968511   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:37.968850   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:37.969020   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:20:37.969137   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:20:37.969263   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:20:37.969294   30723 client.go:168] LocalClient.Create starting
	I0815 00:20:37.969328   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:20:37.969362   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:20:37.969377   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:20:37.969430   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:20:37.969453   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:20:37.969472   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:20:37.969493   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:20:37.969502   30723 main.go:141] libmachine: (ha-863044) Calling .PreCreateCheck
	I0815 00:20:37.969775   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:20:37.970350   30723 main.go:141] libmachine: Creating machine...
	I0815 00:20:37.970364   30723 main.go:141] libmachine: (ha-863044) Calling .Create
	I0815 00:20:37.970467   30723 main.go:141] libmachine: (ha-863044) Creating KVM machine...
	I0815 00:20:37.971753   30723 main.go:141] libmachine: (ha-863044) DBG | found existing default KVM network
	I0815 00:20:37.972324   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:37.972194   30746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0815 00:20:37.972351   30723 main.go:141] libmachine: (ha-863044) DBG | created network xml: 
	I0815 00:20:37.972371   30723 main.go:141] libmachine: (ha-863044) DBG | <network>
	I0815 00:20:37.972382   30723 main.go:141] libmachine: (ha-863044) DBG |   <name>mk-ha-863044</name>
	I0815 00:20:37.972396   30723 main.go:141] libmachine: (ha-863044) DBG |   <dns enable='no'/>
	I0815 00:20:37.972405   30723 main.go:141] libmachine: (ha-863044) DBG |   
	I0815 00:20:37.972418   30723 main.go:141] libmachine: (ha-863044) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 00:20:37.972428   30723 main.go:141] libmachine: (ha-863044) DBG |     <dhcp>
	I0815 00:20:37.972440   30723 main.go:141] libmachine: (ha-863044) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 00:20:37.972457   30723 main.go:141] libmachine: (ha-863044) DBG |     </dhcp>
	I0815 00:20:37.972474   30723 main.go:141] libmachine: (ha-863044) DBG |   </ip>
	I0815 00:20:37.972484   30723 main.go:141] libmachine: (ha-863044) DBG |   
	I0815 00:20:37.972494   30723 main.go:141] libmachine: (ha-863044) DBG | </network>
	I0815 00:20:37.972503   30723 main.go:141] libmachine: (ha-863044) DBG | 
	I0815 00:20:37.977541   30723 main.go:141] libmachine: (ha-863044) DBG | trying to create private KVM network mk-ha-863044 192.168.39.0/24...
	I0815 00:20:38.042063   30723 main.go:141] libmachine: (ha-863044) DBG | private KVM network mk-ha-863044 192.168.39.0/24 created
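	[Editor's note] The DBG lines above show the kvm2 driver picking a free private subnet, rendering a libvirt network XML definition, and creating the network. As a rough illustration of that step only, here is a minimal sketch using the libvirt-go bindings (libvirt.org/go/libvirt); the calls are standard libvirt-go API, but this is not the driver's actual implementation.

```go
// Illustrative sketch: define and start a libvirt network from XML, roughly
// mirroring the "created network xml" / "private KVM network ... created" steps
// in the log above. Not the kvm2 driver's real code path.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-ha-863044</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the network persistently, then bring it up.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer network.Free()

	if err := network.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private KVM network mk-ha-863044 is active")
}
```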
	I0815 00:20:38.042092   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.042022   30746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:38.042105   30723 main.go:141] libmachine: (ha-863044) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 ...
	I0815 00:20:38.042118   30723 main.go:141] libmachine: (ha-863044) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:20:38.042177   30723 main.go:141] libmachine: (ha-863044) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:20:38.290980   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.290871   30746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa...
	I0815 00:20:38.474892   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.474766   30746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/ha-863044.rawdisk...
	I0815 00:20:38.474942   30723 main.go:141] libmachine: (ha-863044) DBG | Writing magic tar header
	I0815 00:20:38.474957   30723 main.go:141] libmachine: (ha-863044) DBG | Writing SSH key tar header
	I0815 00:20:38.474968   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:38.474904   30746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 ...
	I0815 00:20:38.475113   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044
	I0815 00:20:38.475151   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044 (perms=drwx------)
	I0815 00:20:38.475178   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:20:38.475190   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:20:38.475200   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:38.475218   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:20:38.475230   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:20:38.475245   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:20:38.475256   30723 main.go:141] libmachine: (ha-863044) DBG | Checking permissions on dir: /home
	I0815 00:20:38.475265   30723 main.go:141] libmachine: (ha-863044) DBG | Skipping /home - not owner
	I0815 00:20:38.475279   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:20:38.475296   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:20:38.475306   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:20:38.475316   30723 main.go:141] libmachine: (ha-863044) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:20:38.475326   30723 main.go:141] libmachine: (ha-863044) Creating domain...
	I0815 00:20:38.476266   30723 main.go:141] libmachine: (ha-863044) define libvirt domain using xml: 
	I0815 00:20:38.476283   30723 main.go:141] libmachine: (ha-863044) <domain type='kvm'>
	I0815 00:20:38.476303   30723 main.go:141] libmachine: (ha-863044)   <name>ha-863044</name>
	I0815 00:20:38.476312   30723 main.go:141] libmachine: (ha-863044)   <memory unit='MiB'>2200</memory>
	I0815 00:20:38.476320   30723 main.go:141] libmachine: (ha-863044)   <vcpu>2</vcpu>
	I0815 00:20:38.476327   30723 main.go:141] libmachine: (ha-863044)   <features>
	I0815 00:20:38.476331   30723 main.go:141] libmachine: (ha-863044)     <acpi/>
	I0815 00:20:38.476336   30723 main.go:141] libmachine: (ha-863044)     <apic/>
	I0815 00:20:38.476347   30723 main.go:141] libmachine: (ha-863044)     <pae/>
	I0815 00:20:38.476354   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476363   30723 main.go:141] libmachine: (ha-863044)   </features>
	I0815 00:20:38.476377   30723 main.go:141] libmachine: (ha-863044)   <cpu mode='host-passthrough'>
	I0815 00:20:38.476394   30723 main.go:141] libmachine: (ha-863044)   
	I0815 00:20:38.476402   30723 main.go:141] libmachine: (ha-863044)   </cpu>
	I0815 00:20:38.476407   30723 main.go:141] libmachine: (ha-863044)   <os>
	I0815 00:20:38.476412   30723 main.go:141] libmachine: (ha-863044)     <type>hvm</type>
	I0815 00:20:38.476422   30723 main.go:141] libmachine: (ha-863044)     <boot dev='cdrom'/>
	I0815 00:20:38.476428   30723 main.go:141] libmachine: (ha-863044)     <boot dev='hd'/>
	I0815 00:20:38.476438   30723 main.go:141] libmachine: (ha-863044)     <bootmenu enable='no'/>
	I0815 00:20:38.476444   30723 main.go:141] libmachine: (ha-863044)   </os>
	I0815 00:20:38.476453   30723 main.go:141] libmachine: (ha-863044)   <devices>
	I0815 00:20:38.476461   30723 main.go:141] libmachine: (ha-863044)     <disk type='file' device='cdrom'>
	I0815 00:20:38.476472   30723 main.go:141] libmachine: (ha-863044)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/boot2docker.iso'/>
	I0815 00:20:38.476480   30723 main.go:141] libmachine: (ha-863044)       <target dev='hdc' bus='scsi'/>
	I0815 00:20:38.476485   30723 main.go:141] libmachine: (ha-863044)       <readonly/>
	I0815 00:20:38.476489   30723 main.go:141] libmachine: (ha-863044)     </disk>
	I0815 00:20:38.476511   30723 main.go:141] libmachine: (ha-863044)     <disk type='file' device='disk'>
	I0815 00:20:38.476534   30723 main.go:141] libmachine: (ha-863044)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:20:38.476552   30723 main.go:141] libmachine: (ha-863044)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/ha-863044.rawdisk'/>
	I0815 00:20:38.476561   30723 main.go:141] libmachine: (ha-863044)       <target dev='hda' bus='virtio'/>
	I0815 00:20:38.476571   30723 main.go:141] libmachine: (ha-863044)     </disk>
	I0815 00:20:38.476581   30723 main.go:141] libmachine: (ha-863044)     <interface type='network'>
	I0815 00:20:38.476592   30723 main.go:141] libmachine: (ha-863044)       <source network='mk-ha-863044'/>
	I0815 00:20:38.476602   30723 main.go:141] libmachine: (ha-863044)       <model type='virtio'/>
	I0815 00:20:38.476619   30723 main.go:141] libmachine: (ha-863044)     </interface>
	I0815 00:20:38.476630   30723 main.go:141] libmachine: (ha-863044)     <interface type='network'>
	I0815 00:20:38.476666   30723 main.go:141] libmachine: (ha-863044)       <source network='default'/>
	I0815 00:20:38.476692   30723 main.go:141] libmachine: (ha-863044)       <model type='virtio'/>
	I0815 00:20:38.476703   30723 main.go:141] libmachine: (ha-863044)     </interface>
	I0815 00:20:38.476714   30723 main.go:141] libmachine: (ha-863044)     <serial type='pty'>
	I0815 00:20:38.476722   30723 main.go:141] libmachine: (ha-863044)       <target port='0'/>
	I0815 00:20:38.476732   30723 main.go:141] libmachine: (ha-863044)     </serial>
	I0815 00:20:38.476742   30723 main.go:141] libmachine: (ha-863044)     <console type='pty'>
	I0815 00:20:38.476753   30723 main.go:141] libmachine: (ha-863044)       <target type='serial' port='0'/>
	I0815 00:20:38.476763   30723 main.go:141] libmachine: (ha-863044)     </console>
	I0815 00:20:38.476773   30723 main.go:141] libmachine: (ha-863044)     <rng model='virtio'>
	I0815 00:20:38.476784   30723 main.go:141] libmachine: (ha-863044)       <backend model='random'>/dev/random</backend>
	I0815 00:20:38.476793   30723 main.go:141] libmachine: (ha-863044)     </rng>
	I0815 00:20:38.476801   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476809   30723 main.go:141] libmachine: (ha-863044)     
	I0815 00:20:38.476818   30723 main.go:141] libmachine: (ha-863044)   </devices>
	I0815 00:20:38.476850   30723 main.go:141] libmachine: (ha-863044) </domain>
	I0815 00:20:38.476861   30723 main.go:141] libmachine: (ha-863044) 
	I0815 00:20:38.480820   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:d5:7c:0d in network default
	I0815 00:20:38.481370   30723 main.go:141] libmachine: (ha-863044) Ensuring networks are active...
	I0815 00:20:38.481385   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:38.482026   30723 main.go:141] libmachine: (ha-863044) Ensuring network default is active
	I0815 00:20:38.482381   30723 main.go:141] libmachine: (ha-863044) Ensuring network mk-ha-863044 is active
	I0815 00:20:38.482835   30723 main.go:141] libmachine: (ha-863044) Getting domain xml...
	I0815 00:20:38.483552   30723 main.go:141] libmachine: (ha-863044) Creating domain...
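	[Editor's note] The domain XML dumped above is then defined and booted ("define libvirt domain using xml" / "Creating domain..."). A minimal sketch of that step, assuming the same libvirt-go bindings and a `*libvirt.Connect` handle as in the previous sketch (plus the standard "fmt" package); again illustrative, not the kvm2 driver's code.

```go
// defineAndStartDomain persists a domain definition and boots it, the rough
// equivalent of the two log lines above. Purely illustrative.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}
```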
	I0815 00:20:39.661795   30723 main.go:141] libmachine: (ha-863044) Waiting to get IP...
	I0815 00:20:39.662578   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:39.662947   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:39.662977   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:39.662923   30746 retry.go:31] will retry after 276.183296ms: waiting for machine to come up
	I0815 00:20:39.940317   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:39.940830   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:39.940854   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:39.940780   30746 retry.go:31] will retry after 340.971065ms: waiting for machine to come up
	I0815 00:20:40.283459   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:40.283896   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:40.283923   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:40.283849   30746 retry.go:31] will retry after 409.225445ms: waiting for machine to come up
	I0815 00:20:40.694512   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:40.694967   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:40.694995   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:40.694914   30746 retry.go:31] will retry after 440.059085ms: waiting for machine to come up
	I0815 00:20:41.136412   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:41.136843   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:41.136870   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:41.136804   30746 retry.go:31] will retry after 677.697429ms: waiting for machine to come up
	I0815 00:20:41.815715   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:41.816087   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:41.816111   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:41.816049   30746 retry.go:31] will retry after 694.446796ms: waiting for machine to come up
	I0815 00:20:42.511865   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:42.512309   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:42.512343   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:42.512273   30746 retry.go:31] will retry after 1.147726516s: waiting for machine to come up
	I0815 00:20:43.661329   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:43.661883   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:43.661913   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:43.661833   30746 retry.go:31] will retry after 1.094040829s: waiting for machine to come up
	I0815 00:20:44.757629   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:44.758099   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:44.758128   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:44.758042   30746 retry.go:31] will retry after 1.277852484s: waiting for machine to come up
	I0815 00:20:46.037289   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:46.037687   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:46.037735   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:46.037659   30746 retry.go:31] will retry after 1.561255826s: waiting for machine to come up
	I0815 00:20:47.601481   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:47.601960   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:47.601989   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:47.601914   30746 retry.go:31] will retry after 2.267168102s: waiting for machine to come up
	I0815 00:20:49.871062   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:49.871453   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:49.871481   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:49.871403   30746 retry.go:31] will retry after 2.480250796s: waiting for machine to come up
	I0815 00:20:52.354878   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:52.355276   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:52.355319   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:52.355209   30746 retry.go:31] will retry after 4.383643095s: waiting for machine to come up
	I0815 00:20:56.742910   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:20:56.743240   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find current IP address of domain ha-863044 in network mk-ha-863044
	I0815 00:20:56.743266   30723 main.go:141] libmachine: (ha-863044) DBG | I0815 00:20:56.743189   30746 retry.go:31] will retry after 5.191918682s: waiting for machine to come up
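	[Editor's note] Each "will retry after ..." line above is one iteration of a poll-with-growing-backoff loop that keeps checking the domain for a DHCP lease until an IP appears. An illustrative sketch of that pattern follows (the interval growth and the `lookupIP` callback are assumptions made for the example, not minikube's retry.go; requires "fmt", "log", and "time").

```go
// waitForIP polls lookupIP until it returns an address or the deadline passes,
// sleeping a little longer after each failed attempt, similar in spirit to the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}
```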
	I0815 00:21:01.937574   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:01.938054   30723 main.go:141] libmachine: (ha-863044) Found IP for machine: 192.168.39.6
	I0815 00:21:01.938082   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has current primary IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:01.938091   30723 main.go:141] libmachine: (ha-863044) Reserving static IP address...
	I0815 00:21:01.938429   30723 main.go:141] libmachine: (ha-863044) DBG | unable to find host DHCP lease matching {name: "ha-863044", mac: "52:54:00:32:35:5d", ip: "192.168.39.6"} in network mk-ha-863044
	I0815 00:21:02.005802   30723 main.go:141] libmachine: (ha-863044) DBG | Getting to WaitForSSH function...
	I0815 00:21:02.005829   30723 main.go:141] libmachine: (ha-863044) Reserved static IP address: 192.168.39.6
	I0815 00:21:02.005843   30723 main.go:141] libmachine: (ha-863044) Waiting for SSH to be available...
	I0815 00:21:02.008469   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.008856   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.008879   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.009038   30723 main.go:141] libmachine: (ha-863044) DBG | Using SSH client type: external
	I0815 00:21:02.009061   30723 main.go:141] libmachine: (ha-863044) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa (-rw-------)
	I0815 00:21:02.009100   30723 main.go:141] libmachine: (ha-863044) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:02.009115   30723 main.go:141] libmachine: (ha-863044) DBG | About to run SSH command:
	I0815 00:21:02.009127   30723 main.go:141] libmachine: (ha-863044) DBG | exit 0
	I0815 00:21:02.136234   30723 main.go:141] libmachine: (ha-863044) DBG | SSH cmd err, output: <nil>: 
	I0815 00:21:02.136566   30723 main.go:141] libmachine: (ha-863044) KVM machine creation complete!
	I0815 00:21:02.136837   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:21:02.137355   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:02.137542   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:02.137718   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:21:02.137737   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:02.138897   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:21:02.138909   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:21:02.138914   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:21:02.138920   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.140964   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.141278   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.141316   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.141373   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.141518   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.141679   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.141849   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.142002   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.142176   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.142185   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:21:02.251373   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:21:02.251394   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:21:02.251401   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.253724   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.254037   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.254065   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.254187   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.254356   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.254518   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.254662   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.254902   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.255082   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.255092   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:21:02.364705   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:21:02.364790   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:21:02.364801   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:21:02.364808   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.365023   30723 buildroot.go:166] provisioning hostname "ha-863044"
	I0815 00:21:02.365045   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.365234   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.367819   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.368129   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.368147   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.368314   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.368539   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.368686   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.368797   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.368929   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.369080   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.369091   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044 && echo "ha-863044" | sudo tee /etc/hostname
	I0815 00:21:02.494243   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:21:02.494268   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.497012   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.497355   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.497386   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.497557   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.497720   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.497856   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.497991   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.498199   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.498412   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.498431   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:21:02.616370   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
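	[Editor's note] The provisioning above (the "exit 0" probe, the hostname command, the /etc/hosts fix-up) is all done by running shell fragments over SSH with the machine's generated key. Below is a self-contained sketch of running one such command with golang.org/x/crypto/ssh, reusing the address, user, key path, and hostname command from the log; the helper itself is illustrative, not libmachine's implementation.

```go
// runOverSSH executes one provisioning command on the guest over SSH using a
// private key, as the "About to run SSH command" steps in the log do.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}

func main() {
	out, err := runOverSSH("192.168.39.6:22", "docker",
		"/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa",
		`sudo hostname ha-863044 && echo "ha-863044" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("ssh command failed: %v", err)
	}
	log.Printf("output: %s", out)
}
```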
	I0815 00:21:02.616398   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:21:02.616417   30723 buildroot.go:174] setting up certificates
	I0815 00:21:02.616425   30723 provision.go:84] configureAuth start
	I0815 00:21:02.616433   30723 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:21:02.616703   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:02.619259   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.619551   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.619574   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.619707   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.621625   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.621917   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.621940   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.622036   30723 provision.go:143] copyHostCerts
	I0815 00:21:02.622065   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:02.622090   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:21:02.622105   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:02.622168   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:21:02.622264   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:02.622283   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:21:02.622289   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:02.622315   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:21:02.622376   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:02.622391   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:21:02.622395   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:02.622416   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:21:02.622472   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044 san=[127.0.0.1 192.168.39.6 ha-863044 localhost minikube]
	I0815 00:21:02.682385   30723 provision.go:177] copyRemoteCerts
	I0815 00:21:02.682445   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:21:02.682469   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.684881   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.685194   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.685216   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.685396   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.685565   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.685694   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.685821   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:02.770543   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:21:02.770622   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:21:02.792790   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:21:02.792855   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 00:21:02.815892   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:21:02.815971   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:21:02.837522   30723 provision.go:87] duration metric: took 221.084548ms to configureAuth
	I0815 00:21:02.837555   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:21:02.837712   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:02.837781   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:02.840096   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.840433   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:02.840458   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:02.840559   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:02.840739   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.840893   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:02.841013   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:02.841119   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:02.841304   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:02.841325   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:21:03.101543   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:21:03.101578   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:21:03.101589   30723 main.go:141] libmachine: (ha-863044) Calling .GetURL
	I0815 00:21:03.103042   30723 main.go:141] libmachine: (ha-863044) DBG | Using libvirt version 6000000
	I0815 00:21:03.105226   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.105597   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.105638   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.105935   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:21:03.105948   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:21:03.105954   30723 client.go:171] duration metric: took 25.136652037s to LocalClient.Create
	I0815 00:21:03.105976   30723 start.go:167] duration metric: took 25.136714259s to libmachine.API.Create "ha-863044"
	I0815 00:21:03.105990   30723 start.go:293] postStartSetup for "ha-863044" (driver="kvm2")
	I0815 00:21:03.106001   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:21:03.106024   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.106229   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:21:03.106252   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.108382   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.108765   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.108797   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.108909   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.109070   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.109213   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.109423   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.194188   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:21:03.198338   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:21:03.198369   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:21:03.198449   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:21:03.198542   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:21:03.198554   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:21:03.198643   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:21:03.207701   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:03.228996   30723 start.go:296] duration metric: took 122.994267ms for postStartSetup
	I0815 00:21:03.229035   30723 main.go:141] libmachine: (ha-863044) Calling .GetConfigRaw
	I0815 00:21:03.229627   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:03.232115   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.232410   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.232435   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.232756   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:03.232953   30723 start.go:128] duration metric: took 25.280860151s to createHost
	I0815 00:21:03.232975   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.235077   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.235386   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.235412   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.235519   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.235689   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.235842   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.235958   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.236077   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:03.236256   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:21:03.236279   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:21:03.344631   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681263.324116942
	
	I0815 00:21:03.344665   30723 fix.go:216] guest clock: 1723681263.324116942
	I0815 00:21:03.344674   30723 fix.go:229] Guest: 2024-08-15 00:21:03.324116942 +0000 UTC Remote: 2024-08-15 00:21:03.232965678 +0000 UTC m=+25.385115084 (delta=91.151264ms)
	I0815 00:21:03.344710   30723 fix.go:200] guest clock delta is within tolerance: 91.151264ms
	I0815 00:21:03.344720   30723 start.go:83] releasing machines lock for "ha-863044", held for 25.392691668s
	I0815 00:21:03.344743   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.345004   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:03.347482   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.347795   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.347821   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.347923   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348404   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348551   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:03.348648   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:21:03.348715   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.348723   30723 ssh_runner.go:195] Run: cat /version.json
	I0815 00:21:03.348737   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:03.350881   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351228   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.351255   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351278   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351320   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.351512   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.351569   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:03.351594   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:03.351655   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.351721   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:03.351797   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.351869   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:03.351967   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:03.352115   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:03.433522   30723 ssh_runner.go:195] Run: systemctl --version
	I0815 00:21:03.466093   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:21:03.619012   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:21:03.624678   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:21:03.624728   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:21:03.640029   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:21:03.640044   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:21:03.640090   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:21:03.655169   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:21:03.667440   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:21:03.667479   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:21:03.679720   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:21:03.692116   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:21:03.801801   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:21:03.955057   30723 docker.go:233] disabling docker service ...
	I0815 00:21:03.955114   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:21:03.968149   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:21:03.979905   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:21:04.095537   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:21:04.216331   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:21:04.230503   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:21:04.247875   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:21:04.247944   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.258217   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:21:04.258281   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.267758   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.276984   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.285989   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:21:04.295369   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.304416   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:04.319501   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
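
	The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl. A quick way to confirm the effective values afterwards (not part of the captured run; `crio config` is the same command the installer invokes further down) is:

	    # dump CRI-O's merged configuration and pick out the fields edited above
	    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
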
	I0815 00:21:04.329626   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:21:04.338999   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:21:04.339049   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:21:04.351366   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:21:04.360028   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:04.472934   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:21:04.607975   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:21:04.608063   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:21:04.613012   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:21:04.613054   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:21:04.616396   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:21:04.656063   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:21:04.656156   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:04.686776   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:04.717881   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:21:04.718992   30723 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:21:04.721533   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:04.721792   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:04.721824   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:04.721999   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:21:04.725839   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:04.739377   30723 kubeadm.go:883] updating cluster {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:21:04.739515   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:21:04.739573   30723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:21:04.773569   30723 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 00:21:04.773642   30723 ssh_runner.go:195] Run: which lz4
	I0815 00:21:04.777366   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 00:21:04.777466   30723 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 00:21:04.781342   30723 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 00:21:04.781373   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 00:21:05.969606   30723 crio.go:462] duration metric: took 1.192161234s to copy over tarball
	I0815 00:21:05.969672   30723 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 00:21:07.918007   30723 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.948304703s)
	I0815 00:21:07.918039   30723 crio.go:469] duration metric: took 1.948406345s to extract the tarball
	I0815 00:21:07.918049   30723 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 00:21:07.954630   30723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:21:07.995361   30723 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:21:07.995385   30723 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:21:07.995394   30723 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0815 00:21:07.995513   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
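
	The kubelet command line above (ExecStart with --hostname-override, --node-ip and the bootstrap/kubeconfig paths) is written into the systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged way to inspect what the unit ends up running with on the node is:

	    # show the kubelet unit together with its drop-ins and the final ExecStart line
	    systemctl cat kubelet | grep -B1 -A1 '^ExecStart='
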
	I0815 00:21:07.995608   30723 ssh_runner.go:195] Run: crio config
	I0815 00:21:08.039497   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:21:08.039518   30723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 00:21:08.039528   30723 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:21:08.039555   30723 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863044 NodeName:ha-863044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:21:08.039677   30723 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863044"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
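
	The generated kubeadm configuration above stacks an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration in a single file, which is written to /var/tmp/minikube/kubeadm.yaml further down. Recent kubeadm releases can sanity-check such a file before it is fed to `kubeadm init`; a hedged example, reusing the paths from this run, is:

	    # validate the multi-document kubeadm config before running `kubeadm init`
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
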
	
	I0815 00:21:08.039698   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:21:08.039740   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:21:08.054395   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:21:08.054570   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
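
	kube-vip runs as a static pod on each control-plane node, holds the plndr-cp-lock leader lease, and on the current leader attaches the VIP 192.168.39.254 as a /32 on eth0 while load-balancing port 8443 across API servers. Two hedged spot checks for that behaviour (not captured in this run), assuming you are on the current leader, are:

	    # the VIP should be present as an extra /32 on the leader's eth0
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    # the API server should answer on the VIP (healthz is typically readable without credentials on a default cluster)
	    curl -sk https://192.168.39.254:8443/healthz
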
	I0815 00:21:08.054629   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:08.064446   30723 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:21:08.064522   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 00:21:08.072777   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 00:21:08.086979   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:21:08.101588   30723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 00:21:08.115839   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 00:21:08.129970   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:21:08.133232   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:08.143442   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:08.249523   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:21:08.265025   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.6
	I0815 00:21:08.265041   30723 certs.go:194] generating shared ca certs ...
	I0815 00:21:08.265058   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.265234   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:21:08.265302   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:21:08.265317   30723 certs.go:256] generating profile certs ...
	I0815 00:21:08.265386   30723 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:21:08.265402   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt with IP's: []
	I0815 00:21:08.485903   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt ...
	I0815 00:21:08.485937   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt: {Name:mk852256948a32d4c87a5e18722bfc8c23ec9719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.486136   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key ...
	I0815 00:21:08.486150   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key: {Name:mk1a22c6ac652160a7de25f3603d049244701baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.486254   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8
	I0815 00:21:08.486273   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.254]
	I0815 00:21:08.567621   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 ...
	I0815 00:21:08.567652   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8: {Name:mk14b63d91ccee3ec4cca025aabfdc68aaf70a88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.567825   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8 ...
	I0815 00:21:08.567840   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8: {Name:mkbbc89093724d7eaf1c152c604b902a33bb344d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.567934   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.1b81b6e8 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:21:08.568040   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.1b81b6e8 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:21:08.568125   30723 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:21:08.568144   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt with IP's: []
	I0815 00:21:08.703605   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt ...
	I0815 00:21:08.703635   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt: {Name:mkac43649e9a87f80a604ef4572c3441e99afc63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.703802   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key ...
	I0815 00:21:08.703815   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key: {Name:mk20457fff8d55d19661ee46633906c40d27707f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:08.703909   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:21:08.703931   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:21:08.703947   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:21:08.703966   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:21:08.703984   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:21:08.704002   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:21:08.704018   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:21:08.704035   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:21:08.704097   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:21:08.704142   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:21:08.704155   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:21:08.704188   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:21:08.704221   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:21:08.704254   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:21:08.704308   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:08.704354   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.704382   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.704399   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:08.704965   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:21:08.728196   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:21:08.749792   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:21:08.770633   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:21:08.791470   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 00:21:08.812005   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:21:08.833554   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:21:08.854902   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:21:08.875847   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:21:08.896234   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:21:08.917767   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:21:08.938830   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:21:08.953609   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:21:08.958681   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:21:08.967911   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.971605   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.971644   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:21:08.976665   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:21:08.985881   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:21:08.995036   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.998744   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:21:08.998805   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:21:09.003785   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:21:09.015972   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:21:09.031850   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.036378   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.036442   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:09.043443   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
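
	The openssl/ln pairs above implement the standard OpenSSL CA lookup scheme: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject-name hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run), which is exactly what `openssl x509 -hash` prints. For example, recomputing the hash for the minikube CA from this run:

	    # prints the subject-name hash used for the /etc/ssl/certs/<hash>.0 symlink
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941 in this run, so /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
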
	I0815 00:21:09.057791   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:21:09.062303   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:21:09.062347   30723 kubeadm.go:392] StartCluster: {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:21:09.062415   30723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:21:09.062473   30723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:21:09.105164   30723 cri.go:89] found id: ""
	I0815 00:21:09.105237   30723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 00:21:09.114154   30723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 00:21:09.122721   30723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 00:21:09.131084   30723 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 00:21:09.131101   30723 kubeadm.go:157] found existing configuration files:
	
	I0815 00:21:09.131144   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 00:21:09.139015   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 00:21:09.139074   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 00:21:09.147288   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 00:21:09.155460   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 00:21:09.155514   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 00:21:09.163812   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 00:21:09.172354   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 00:21:09.172393   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 00:21:09.180596   30723 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 00:21:09.188410   30723 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 00:21:09.188462   30723 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 00:21:09.196726   30723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 00:21:09.298392   30723 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 00:21:09.298493   30723 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 00:21:09.390465   30723 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 00:21:09.390578   30723 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 00:21:09.390720   30723 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 00:21:09.400023   30723 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 00:21:09.402780   30723 out.go:204]   - Generating certificates and keys ...
	I0815 00:21:09.402867   30723 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 00:21:09.402924   30723 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 00:21:09.726623   30723 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 00:21:09.822504   30723 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 00:21:09.906086   30723 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 00:21:10.322395   30723 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 00:21:10.435919   30723 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 00:21:10.436076   30723 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-863044 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0815 00:21:10.824872   30723 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 00:21:10.825171   30723 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-863044 localhost] and IPs [192.168.39.6 127.0.0.1 ::1]
	I0815 00:21:10.943003   30723 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 00:21:11.019310   30723 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 00:21:11.180466   30723 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 00:21:11.180742   30723 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 00:21:11.526821   30723 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 00:21:11.916049   30723 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 00:21:12.107671   30723 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 00:21:12.205597   30723 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 00:21:12.311189   30723 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 00:21:12.311883   30723 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 00:21:12.315179   30723 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 00:21:12.356392   30723 out.go:204]   - Booting up control plane ...
	I0815 00:21:12.356554   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 00:21:12.356740   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 00:21:12.356854   30723 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 00:21:12.357050   30723 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 00:21:12.357176   30723 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 00:21:12.357257   30723 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 00:21:12.486137   30723 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 00:21:12.486285   30723 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 00:21:12.987140   30723 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.454775ms
	I0815 00:21:12.987229   30723 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 00:21:18.942567   30723 kubeadm.go:310] [api-check] The API server is healthy after 5.958117383s
	I0815 00:21:18.954188   30723 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 00:21:18.966016   30723 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 00:21:19.498724   30723 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 00:21:19.498879   30723 kubeadm.go:310] [mark-control-plane] Marking the node ha-863044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 00:21:19.509045   30723 kubeadm.go:310] [bootstrap-token] Using token: 3imy80.4d17q2wqt4vy2b7n
	I0815 00:21:19.510302   30723 out.go:204]   - Configuring RBAC rules ...
	I0815 00:21:19.510411   30723 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 00:21:19.519698   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 00:21:19.530551   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 00:21:19.536265   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 00:21:19.540018   30723 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 00:21:19.543648   30723 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 00:21:19.560630   30723 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 00:21:19.784650   30723 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 00:21:20.349712   30723 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 00:21:20.349732   30723 kubeadm.go:310] 
	I0815 00:21:20.349804   30723 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 00:21:20.349812   30723 kubeadm.go:310] 
	I0815 00:21:20.349914   30723 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 00:21:20.349934   30723 kubeadm.go:310] 
	I0815 00:21:20.349960   30723 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 00:21:20.350022   30723 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 00:21:20.350098   30723 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 00:21:20.350108   30723 kubeadm.go:310] 
	I0815 00:21:20.350182   30723 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 00:21:20.350192   30723 kubeadm.go:310] 
	I0815 00:21:20.350251   30723 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 00:21:20.350261   30723 kubeadm.go:310] 
	I0815 00:21:20.350323   30723 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 00:21:20.350431   30723 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 00:21:20.350520   30723 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 00:21:20.350529   30723 kubeadm.go:310] 
	I0815 00:21:20.350648   30723 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 00:21:20.350757   30723 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 00:21:20.350773   30723 kubeadm.go:310] 
	I0815 00:21:20.350876   30723 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3imy80.4d17q2wqt4vy2b7n \
	I0815 00:21:20.351020   30723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 00:21:20.351050   30723 kubeadm.go:310] 	--control-plane 
	I0815 00:21:20.351058   30723 kubeadm.go:310] 
	I0815 00:21:20.351178   30723 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 00:21:20.351188   30723 kubeadm.go:310] 
	I0815 00:21:20.351297   30723 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3imy80.4d17q2wqt4vy2b7n \
	I0815 00:21:20.351436   30723 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
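
	Both join commands above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. If the hash needs to be recomputed later (for example when adding the secondary control-plane nodes this HA test exercises), a hedged version of the standard kubeadm recipe, adjusted for minikube's /var/lib/minikube/certs directory, is:

	    # recompute the discovery-token-ca-cert-hash from the cluster CA certificate
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
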
	I0815 00:21:20.352164   30723 kubeadm.go:310] W0815 00:21:09.278548     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:21:20.352563   30723 kubeadm.go:310] W0815 00:21:09.281384     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 00:21:20.352720   30723 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 00:21:20.352734   30723 cni.go:84] Creating CNI manager for ""
	I0815 00:21:20.352740   30723 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 00:21:20.354542   30723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 00:21:20.355895   30723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 00:21:20.360879   30723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 00:21:20.360897   30723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 00:21:20.380859   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 00:21:20.736755   30723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 00:21:20.736885   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044 minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=true
	I0815 00:21:20.736895   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:20.762239   30723 ops.go:34] apiserver oom_adj: -16
	I0815 00:21:20.891351   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:21.391488   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:21.891422   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:22.392140   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:22.892374   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:23.391423   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:23.892112   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:24.391692   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 00:21:24.515446   30723 kubeadm.go:1113] duration metric: took 3.778599805s to wait for elevateKubeSystemPrivileges
	I0815 00:21:24.515482   30723 kubeadm.go:394] duration metric: took 15.453137418s to StartCluster
	I0815 00:21:24.515502   30723 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:24.515571   30723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:21:24.516397   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:24.516624   30723 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:24.516674   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:21:24.516638   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 00:21:24.516672   30723 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 00:21:24.516753   30723 addons.go:69] Setting storage-provisioner=true in profile "ha-863044"
	I0815 00:21:24.516783   30723 addons.go:234] Setting addon storage-provisioner=true in "ha-863044"
	I0815 00:21:24.516782   30723 addons.go:69] Setting default-storageclass=true in profile "ha-863044"
	I0815 00:21:24.516812   30723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-863044"
	I0815 00:21:24.516839   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:24.517236   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:24.517312   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.517341   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.517417   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.517489   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.531778   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0815 00:21:24.532107   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0815 00:21:24.532181   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.532554   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.532726   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.532749   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.533066   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.533083   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.533101   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.533293   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.533377   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.533932   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.533961   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.535328   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:21:24.535676   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 00:21:24.536188   30723 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 00:21:24.536478   30723 addons.go:234] Setting addon default-storageclass=true in "ha-863044"
	I0815 00:21:24.536519   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:24.536896   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.536938   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.549472   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0815 00:21:24.549944   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.550465   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.550490   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.550732   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0815 00:21:24.550815   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.550974   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.551148   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.551573   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.551595   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.551893   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.552322   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:24.552362   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:24.552586   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:24.554346   30723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 00:21:24.555673   30723 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:21:24.555691   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 00:21:24.555712   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:24.558336   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.558682   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:24.558698   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.558836   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:24.558999   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:24.559168   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:24.559279   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:24.567350   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0815 00:21:24.567673   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:24.568039   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:24.568052   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:24.568338   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:24.568453   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:24.570006   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:24.570189   30723 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 00:21:24.570202   30723 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 00:21:24.570218   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:24.572529   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.572873   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:24.572894   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:24.573005   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:24.573166   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:24.573302   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:24.573420   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:24.687201   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 00:21:24.734629   30723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 00:21:24.758862   30723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 00:21:25.147476   30723 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
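The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1) and adds a log directive. After the replace, the Corefile should contain a fragment roughly like the following (a sketch of the expected result; the plugins elided with "..." are the stock ones and their exact ordering may vary):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

It can be confirmed from the host with kubectl -n kube-system get configmap coredns -o yaml against the kubeconfig written earlier in this run.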
	I0815 00:21:25.147499   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.147511   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.147794   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.147810   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.147817   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.147824   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.147828   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.148020   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.148028   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.148032   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.148084   30723 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 00:21:25.148102   30723 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 00:21:25.148183   30723 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 00:21:25.148193   30723 round_trippers.go:469] Request Headers:
	I0815 00:21:25.148204   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:21:25.148211   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:21:25.155664   30723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 00:21:25.156491   30723 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 00:21:25.156510   30723 round_trippers.go:469] Request Headers:
	I0815 00:21:25.156524   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:21:25.156537   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:21:25.156543   30723 round_trippers.go:473]     Content-Type: application/json
	I0815 00:21:25.158831   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
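The GET/PUT pair above is the default-storageclass addon touching the "standard" StorageClass, presumably to mark it as the cluster default via its annotation. A quick external check, assuming the kubeconfig updated earlier in this run:

    kubectl --kubeconfig /home/jenkins/minikube-integration/19443-13088/kubeconfig get storageclass
    # the default class is shown with a "(default)" suffix, driven by the
    # storageclass.kubernetes.io/is-default-class annotation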
	I0815 00:21:25.158952   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.158969   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.159178   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.159193   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.159204   30723 main.go:141] libmachine: (ha-863044) DBG | Closing plugin on server side
	I0815 00:21:25.352704   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.352732   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.353023   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.353044   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.353055   30723 main.go:141] libmachine: Making call to close driver server
	I0815 00:21:25.353064   30723 main.go:141] libmachine: (ha-863044) Calling .Close
	I0815 00:21:25.353257   30723 main.go:141] libmachine: Successfully made call to close driver server
	I0815 00:21:25.353270   30723 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 00:21:25.354961   30723 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0815 00:21:25.356150   30723 addons.go:510] duration metric: took 839.496754ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0815 00:21:25.356179   30723 start.go:246] waiting for cluster config update ...
	I0815 00:21:25.356194   30723 start.go:255] writing updated cluster config ...
	I0815 00:21:25.357847   30723 out.go:177] 
	I0815 00:21:25.359883   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:25.359959   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:25.361619   30723 out.go:177] * Starting "ha-863044-m02" control-plane node in "ha-863044" cluster
	I0815 00:21:25.362824   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:21:25.362844   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:21:25.362930   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:21:25.362944   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:21:25.363037   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:25.363202   30723 start.go:360] acquireMachinesLock for ha-863044-m02: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:21:25.363249   30723 start.go:364] duration metric: took 25.831µs to acquireMachinesLock for "ha-863044-m02"
	I0815 00:21:25.363275   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:25.363366   30723 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 00:21:25.364976   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:21:25.365059   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:25.365089   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:25.380676   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0815 00:21:25.381123   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:25.381622   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:25.381646   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:25.381933   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:25.382107   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:25.382236   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:25.382380   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:21:25.382401   30723 client.go:168] LocalClient.Create starting
	I0815 00:21:25.382441   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:21:25.382469   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:21:25.382482   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:21:25.382528   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:21:25.382548   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:21:25.382564   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:21:25.382585   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:21:25.382596   30723 main.go:141] libmachine: (ha-863044-m02) Calling .PreCreateCheck
	I0815 00:21:25.382893   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:25.383289   30723 main.go:141] libmachine: Creating machine...
	I0815 00:21:25.383302   30723 main.go:141] libmachine: (ha-863044-m02) Calling .Create
	I0815 00:21:25.383460   30723 main.go:141] libmachine: (ha-863044-m02) Creating KVM machine...
	I0815 00:21:25.384763   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found existing default KVM network
	I0815 00:21:25.384935   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found existing private KVM network mk-ha-863044
	I0815 00:21:25.385100   30723 main.go:141] libmachine: (ha-863044-m02) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 ...
	I0815 00:21:25.385119   30723 main.go:141] libmachine: (ha-863044-m02) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:21:25.385218   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.385110   31086 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:21:25.385309   30723 main.go:141] libmachine: (ha-863044-m02) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:21:25.650654   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.650540   31086 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa...
	I0815 00:21:25.806017   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.805904   31086 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/ha-863044-m02.rawdisk...
	I0815 00:21:25.806049   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Writing magic tar header
	I0815 00:21:25.806070   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Writing SSH key tar header
	I0815 00:21:25.806084   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:25.806051   31086 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 ...
	I0815 00:21:25.806226   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02
	I0815 00:21:25.806252   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02 (perms=drwx------)
	I0815 00:21:25.806264   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:21:25.806280   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:21:25.806294   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:21:25.806301   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:21:25.806310   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:21:25.806319   30723 main.go:141] libmachine: (ha-863044-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:21:25.806329   30723 main.go:141] libmachine: (ha-863044-m02) Creating domain...
	I0815 00:21:25.806343   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:21:25.806357   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:21:25.806370   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:21:25.806381   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:21:25.806390   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Checking permissions on dir: /home
	I0815 00:21:25.806396   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Skipping /home - not owner
	I0815 00:21:25.807283   30723 main.go:141] libmachine: (ha-863044-m02) define libvirt domain using xml: 
	I0815 00:21:25.807302   30723 main.go:141] libmachine: (ha-863044-m02) <domain type='kvm'>
	I0815 00:21:25.807311   30723 main.go:141] libmachine: (ha-863044-m02)   <name>ha-863044-m02</name>
	I0815 00:21:25.807327   30723 main.go:141] libmachine: (ha-863044-m02)   <memory unit='MiB'>2200</memory>
	I0815 00:21:25.807336   30723 main.go:141] libmachine: (ha-863044-m02)   <vcpu>2</vcpu>
	I0815 00:21:25.807344   30723 main.go:141] libmachine: (ha-863044-m02)   <features>
	I0815 00:21:25.807352   30723 main.go:141] libmachine: (ha-863044-m02)     <acpi/>
	I0815 00:21:25.807366   30723 main.go:141] libmachine: (ha-863044-m02)     <apic/>
	I0815 00:21:25.807378   30723 main.go:141] libmachine: (ha-863044-m02)     <pae/>
	I0815 00:21:25.807401   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807413   30723 main.go:141] libmachine: (ha-863044-m02)   </features>
	I0815 00:21:25.807423   30723 main.go:141] libmachine: (ha-863044-m02)   <cpu mode='host-passthrough'>
	I0815 00:21:25.807431   30723 main.go:141] libmachine: (ha-863044-m02)   
	I0815 00:21:25.807442   30723 main.go:141] libmachine: (ha-863044-m02)   </cpu>
	I0815 00:21:25.807450   30723 main.go:141] libmachine: (ha-863044-m02)   <os>
	I0815 00:21:25.807461   30723 main.go:141] libmachine: (ha-863044-m02)     <type>hvm</type>
	I0815 00:21:25.807471   30723 main.go:141] libmachine: (ha-863044-m02)     <boot dev='cdrom'/>
	I0815 00:21:25.807481   30723 main.go:141] libmachine: (ha-863044-m02)     <boot dev='hd'/>
	I0815 00:21:25.807491   30723 main.go:141] libmachine: (ha-863044-m02)     <bootmenu enable='no'/>
	I0815 00:21:25.807525   30723 main.go:141] libmachine: (ha-863044-m02)   </os>
	I0815 00:21:25.807551   30723 main.go:141] libmachine: (ha-863044-m02)   <devices>
	I0815 00:21:25.807569   30723 main.go:141] libmachine: (ha-863044-m02)     <disk type='file' device='cdrom'>
	I0815 00:21:25.807589   30723 main.go:141] libmachine: (ha-863044-m02)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/boot2docker.iso'/>
	I0815 00:21:25.807602   30723 main.go:141] libmachine: (ha-863044-m02)       <target dev='hdc' bus='scsi'/>
	I0815 00:21:25.807610   30723 main.go:141] libmachine: (ha-863044-m02)       <readonly/>
	I0815 00:21:25.807619   30723 main.go:141] libmachine: (ha-863044-m02)     </disk>
	I0815 00:21:25.807631   30723 main.go:141] libmachine: (ha-863044-m02)     <disk type='file' device='disk'>
	I0815 00:21:25.807646   30723 main.go:141] libmachine: (ha-863044-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:21:25.807661   30723 main.go:141] libmachine: (ha-863044-m02)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/ha-863044-m02.rawdisk'/>
	I0815 00:21:25.807674   30723 main.go:141] libmachine: (ha-863044-m02)       <target dev='hda' bus='virtio'/>
	I0815 00:21:25.807688   30723 main.go:141] libmachine: (ha-863044-m02)     </disk>
	I0815 00:21:25.807700   30723 main.go:141] libmachine: (ha-863044-m02)     <interface type='network'>
	I0815 00:21:25.807726   30723 main.go:141] libmachine: (ha-863044-m02)       <source network='mk-ha-863044'/>
	I0815 00:21:25.807747   30723 main.go:141] libmachine: (ha-863044-m02)       <model type='virtio'/>
	I0815 00:21:25.807762   30723 main.go:141] libmachine: (ha-863044-m02)     </interface>
	I0815 00:21:25.807781   30723 main.go:141] libmachine: (ha-863044-m02)     <interface type='network'>
	I0815 00:21:25.807795   30723 main.go:141] libmachine: (ha-863044-m02)       <source network='default'/>
	I0815 00:21:25.807806   30723 main.go:141] libmachine: (ha-863044-m02)       <model type='virtio'/>
	I0815 00:21:25.807815   30723 main.go:141] libmachine: (ha-863044-m02)     </interface>
	I0815 00:21:25.807830   30723 main.go:141] libmachine: (ha-863044-m02)     <serial type='pty'>
	I0815 00:21:25.807841   30723 main.go:141] libmachine: (ha-863044-m02)       <target port='0'/>
	I0815 00:21:25.807851   30723 main.go:141] libmachine: (ha-863044-m02)     </serial>
	I0815 00:21:25.807860   30723 main.go:141] libmachine: (ha-863044-m02)     <console type='pty'>
	I0815 00:21:25.807870   30723 main.go:141] libmachine: (ha-863044-m02)       <target type='serial' port='0'/>
	I0815 00:21:25.807879   30723 main.go:141] libmachine: (ha-863044-m02)     </console>
	I0815 00:21:25.807884   30723 main.go:141] libmachine: (ha-863044-m02)     <rng model='virtio'>
	I0815 00:21:25.807898   30723 main.go:141] libmachine: (ha-863044-m02)       <backend model='random'>/dev/random</backend>
	I0815 00:21:25.807912   30723 main.go:141] libmachine: (ha-863044-m02)     </rng>
	I0815 00:21:25.807927   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807941   30723 main.go:141] libmachine: (ha-863044-m02)     
	I0815 00:21:25.807954   30723 main.go:141] libmachine: (ha-863044-m02)   </devices>
	I0815 00:21:25.807965   30723 main.go:141] libmachine: (ha-863044-m02) </domain>
	I0815 00:21:25.807980   30723 main.go:141] libmachine: (ha-863044-m02) 
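The driver feeds the XML above to libvirt and then boots the guest. Doing the same by hand against the qemu:///system URI from the machine config would look roughly like this (the file name is hypothetical; the XML is the one just logged):

    virsh --connect qemu:///system define ha-863044-m02.xml
    virsh --connect qemu:///system start ha-863044-m02
    virsh --connect qemu:///system dominfo ha-863044-m02   # confirm state, vCPUs and memory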
	I0815 00:21:25.814743   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:5a:e2:de in network default
	I0815 00:21:25.815224   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring networks are active...
	I0815 00:21:25.815240   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:25.815967   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring network default is active
	I0815 00:21:25.816265   30723 main.go:141] libmachine: (ha-863044-m02) Ensuring network mk-ha-863044 is active
	I0815 00:21:25.816696   30723 main.go:141] libmachine: (ha-863044-m02) Getting domain xml...
	I0815 00:21:25.817316   30723 main.go:141] libmachine: (ha-863044-m02) Creating domain...
	I0815 00:21:27.102595   30723 main.go:141] libmachine: (ha-863044-m02) Waiting to get IP...
	I0815 00:21:27.103754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.104274   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.104329   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.104257   31086 retry.go:31] will retry after 249.806387ms: waiting for machine to come up
	I0815 00:21:27.356115   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.356670   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.356700   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.356604   31086 retry.go:31] will retry after 272.897696ms: waiting for machine to come up
	I0815 00:21:27.630829   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:27.631362   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:27.631388   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:27.631302   31086 retry.go:31] will retry after 423.643372ms: waiting for machine to come up
	I0815 00:21:28.056689   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:28.057185   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:28.057214   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:28.057141   31086 retry.go:31] will retry after 429.885873ms: waiting for machine to come up
	I0815 00:21:28.488749   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:28.489187   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:28.489213   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:28.489151   31086 retry.go:31] will retry after 564.842329ms: waiting for machine to come up
	I0815 00:21:29.055916   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:29.056538   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:29.056573   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:29.056419   31086 retry.go:31] will retry after 952.116011ms: waiting for machine to come up
	I0815 00:21:30.009650   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:30.010110   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:30.010136   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:30.010074   31086 retry.go:31] will retry after 1.163406803s: waiting for machine to come up
	I0815 00:21:31.175551   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:31.175942   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:31.175969   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:31.175901   31086 retry.go:31] will retry after 1.339715785s: waiting for machine to come up
	I0815 00:21:32.517344   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:32.517754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:32.517784   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:32.517702   31086 retry.go:31] will retry after 1.542004388s: waiting for machine to come up
	I0815 00:21:34.061553   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:34.061997   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:34.062033   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:34.061936   31086 retry.go:31] will retry after 1.693143598s: waiting for machine to come up
	I0815 00:21:35.756552   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:35.756971   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:35.756997   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:35.756920   31086 retry.go:31] will retry after 2.225684381s: waiting for machine to come up
	I0815 00:21:37.985128   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:37.985577   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:37.985616   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:37.985542   31086 retry.go:31] will retry after 3.575835042s: waiting for machine to come up
	I0815 00:21:41.563129   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:41.563608   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find current IP address of domain ha-863044-m02 in network mk-ha-863044
	I0815 00:21:41.563645   30723 main.go:141] libmachine: (ha-863044-m02) DBG | I0815 00:21:41.563567   31086 retry.go:31] will retry after 4.387259926s: waiting for machine to come up
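The retries above are the driver polling libvirt's DHCP lease table for the new domain's MAC address. The same table can be read directly while waiting (a sketch using the network name from the log):

    virsh --connect qemu:///system net-dhcp-leases mk-ha-863044
    # a row for MAC 52:54:00:4e:19:c9 appears once the guest has requested an address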
	I0815 00:21:45.951832   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:45.952383   30723 main.go:141] libmachine: (ha-863044-m02) Found IP for machine: 192.168.39.170
	I0815 00:21:45.952413   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has current primary IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:45.952422   30723 main.go:141] libmachine: (ha-863044-m02) Reserving static IP address...
	I0815 00:21:45.953020   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find host DHCP lease matching {name: "ha-863044-m02", mac: "52:54:00:4e:19:c9", ip: "192.168.39.170"} in network mk-ha-863044
	I0815 00:21:46.024826   30723 main.go:141] libmachine: (ha-863044-m02) Reserved static IP address: 192.168.39.170
	I0815 00:21:46.024848   30723 main.go:141] libmachine: (ha-863044-m02) Waiting for SSH to be available...
	I0815 00:21:46.024861   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Getting to WaitForSSH function...
	I0815 00:21:46.027685   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:46.027990   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044
	I0815 00:21:46.028015   30723 main.go:141] libmachine: (ha-863044-m02) DBG | unable to find defined IP address of network mk-ha-863044 interface with MAC address 52:54:00:4e:19:c9
	I0815 00:21:46.028152   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH client type: external
	I0815 00:21:46.028178   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa (-rw-------)
	I0815 00:21:46.028207   30723 main.go:141] libmachine: (ha-863044-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:46.028220   30723 main.go:141] libmachine: (ha-863044-m02) DBG | About to run SSH command:
	I0815 00:21:46.028239   30723 main.go:141] libmachine: (ha-863044-m02) DBG | exit 0
	I0815 00:21:46.031878   30723 main.go:141] libmachine: (ha-863044-m02) DBG | SSH cmd err, output: exit status 255: 
	I0815 00:21:46.031898   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 00:21:46.031904   30723 main.go:141] libmachine: (ha-863044-m02) DBG | command : exit 0
	I0815 00:21:46.031910   30723 main.go:141] libmachine: (ha-863044-m02) DBG | err     : exit status 255
	I0815 00:21:46.031934   30723 main.go:141] libmachine: (ha-863044-m02) DBG | output  : 
	I0815 00:21:49.033998   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Getting to WaitForSSH function...
	I0815 00:21:49.036538   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.036885   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.036912   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.036973   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH client type: external
	I0815 00:21:49.037071   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa (-rw-------)
	I0815 00:21:49.037108   30723 main.go:141] libmachine: (ha-863044-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:21:49.037122   30723 main.go:141] libmachine: (ha-863044-m02) DBG | About to run SSH command:
	I0815 00:21:49.037136   30723 main.go:141] libmachine: (ha-863044-m02) DBG | exit 0
	I0815 00:21:49.160317   30723 main.go:141] libmachine: (ha-863044-m02) DBG | SSH cmd err, output: <nil>: 
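WaitForSSH shells out to the system ssh client with the options logged above; the probe exits 0 once sshd inside the guest is reachable (the attempt at 00:21:46 returned status 255 because the guest address was not yet available). A manual equivalent using the same key and address:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa \
        docker@192.168.39.170 'exit 0'
    echo $?   # 0 when sshd is ready, 255 while the guest is still booting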
	I0815 00:21:49.160617   30723 main.go:141] libmachine: (ha-863044-m02) KVM machine creation complete!
	I0815 00:21:49.160936   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:49.161565   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:49.161757   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:49.161925   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:21:49.161957   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:21:49.163197   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:21:49.163209   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:21:49.163219   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:21:49.163225   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.165390   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.165748   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.165772   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.165893   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.166042   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.166183   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.166294   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.166448   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.166692   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.166706   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:21:49.263652   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:21:49.263679   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:21:49.263691   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.266383   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.266754   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.266782   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.266936   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.267119   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.267264   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.267429   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.267590   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.267753   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.267764   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:21:49.368752   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:21:49.368818   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:21:49.368827   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:21:49.368837   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.369052   30723 buildroot.go:166] provisioning hostname "ha-863044-m02"
	I0815 00:21:49.369074   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.369236   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.371734   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.372061   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.372085   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.372221   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.372404   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.372539   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.372672   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.372814   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.372996   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.373009   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044-m02 && echo "ha-863044-m02" | sudo tee /etc/hostname
	I0815 00:21:49.485265   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044-m02
	
	I0815 00:21:49.485298   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.487683   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.488034   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.488062   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.488238   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.488422   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.488583   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.488740   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.488896   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.489094   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.489113   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:21:49.596979   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
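The shell snippet above sets the transient hostname and pins a 127.0.1.1 entry in /etc/hosts. Inside the guest this can be verified with standard tools:

    hostname                        # prints ha-863044-m02
    grep ha-863044-m02 /etc/hosts   # shows the 127.0.1.1 mapping added by the snippet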
	I0815 00:21:49.597004   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:21:49.597037   30723 buildroot.go:174] setting up certificates
	I0815 00:21:49.597047   30723 provision.go:84] configureAuth start
	I0815 00:21:49.597061   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetMachineName
	I0815 00:21:49.597333   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:49.599655   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.599967   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.599992   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.600116   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.601985   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.602314   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.602340   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.602470   30723 provision.go:143] copyHostCerts
	I0815 00:21:49.602512   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:49.602544   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:21:49.602552   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:21:49.602618   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:21:49.602707   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:49.602725   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:21:49.602729   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:21:49.602753   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:21:49.602794   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:49.602811   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:21:49.602817   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:21:49.602839   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:21:49.602884   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044-m02 san=[127.0.0.1 192.168.39.170 ha-863044-m02 localhost minikube]
	I0815 00:21:49.779877   30723 provision.go:177] copyRemoteCerts
	I0815 00:21:49.779934   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:21:49.779970   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.782304   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.782598   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.782627   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.782861   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.783064   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.783190   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.783323   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:49.861771   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:21:49.861843   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:21:49.888019   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:21:49.888091   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:21:49.910750   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:21:49.910825   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 00:21:49.932521   30723 provision.go:87] duration metric: took 335.457393ms to configureAuth
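
configureAuth above generated a CA-signed server certificate whose SANs are [127.0.0.1 192.168.39.170 ha-863044-m02 localhost minikube] and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained sketch of that kind of issuance with Go's crypto/x509 follows; the key size, validity period and subject fields are illustrative, and this is not the libmachine provisioner code itself.

    // sketch: create a throwaway CA, then issue a server cert carrying the
    // same kind of SAN list the provisioner logs above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.ha-863044-m02"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-863044-m02"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-863044-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.170")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
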
	I0815 00:21:49.932555   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:21:49.932790   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:49.932903   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:49.935628   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.936015   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:49.936046   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:49.936200   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:49.936403   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.936583   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:49.936753   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:49.936914   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:49.937086   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:49.937106   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:21:50.205561   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:21:50.205586   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:21:50.205596   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetURL
	I0815 00:21:50.206889   30723 main.go:141] libmachine: (ha-863044-m02) DBG | Using libvirt version 6000000
	I0815 00:21:50.208898   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.209228   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.209259   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.209398   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:21:50.209411   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:21:50.209417   30723 client.go:171] duration metric: took 24.827007326s to LocalClient.Create
	I0815 00:21:50.209439   30723 start.go:167] duration metric: took 24.827058894s to libmachine.API.Create "ha-863044"
	I0815 00:21:50.209448   30723 start.go:293] postStartSetup for "ha-863044-m02" (driver="kvm2")
	I0815 00:21:50.209457   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:21:50.209477   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.209698   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:21:50.209717   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.211828   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.212089   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.212110   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.212311   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.212484   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.212674   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.212798   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.290097   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:21:50.293623   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:21:50.293643   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:21:50.293698   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:21:50.293765   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:21:50.293774   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:21:50.293852   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:21:50.302156   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:50.323245   30723 start.go:296] duration metric: took 113.784495ms for postStartSetup
	I0815 00:21:50.323298   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetConfigRaw
	I0815 00:21:50.323809   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:50.326686   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.327080   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.327114   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.327346   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:21:50.327522   30723 start.go:128] duration metric: took 24.964146227s to createHost
	I0815 00:21:50.327589   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.329748   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.330035   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.330062   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.330157   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.330327   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.330475   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.330594   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.330773   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:21:50.330964   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0815 00:21:50.330974   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:21:50.428976   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681310.408092904
	
	I0815 00:21:50.429000   30723 fix.go:216] guest clock: 1723681310.408092904
	I0815 00:21:50.429009   30723 fix.go:229] Guest: 2024-08-15 00:21:50.408092904 +0000 UTC Remote: 2024-08-15 00:21:50.327531716 +0000 UTC m=+72.479681123 (delta=80.561188ms)
	I0815 00:21:50.429027   30723 fix.go:200] guest clock delta is within tolerance: 80.561188ms
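
fix.go reads the guest clock over SSH (the mangled `date +%!s(MISSING).%!N(MISSING)` above almost certainly stands for `date +%s.%N`), compares it with the host clock, and proceeds only if the delta is within tolerance, here 80.561188ms. A small sketch of that comparison; the 2s tolerance below is an arbitrary illustration, not minikube's value.

    // sketch: parse a `date +%s.%N` style timestamp from a guest and
    // check whether it is within a tolerance of the local clock.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestTime(raw string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        const tolerance = 2 * time.Second // illustrative only

        guest, err := guestTime("1723681310.408092904") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s (tolerance %s, ok=%v)\n", delta, tolerance, delta <= tolerance)
    }
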
	I0815 00:21:50.429032   30723 start.go:83] releasing machines lock for "ha-863044-m02", held for 25.06576938s
	I0815 00:21:50.429051   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.429294   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:50.431823   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.432221   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.432266   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.433808   30723 out.go:177] * Found network options:
	I0815 00:21:50.435079   30723 out.go:177]   - NO_PROXY=192.168.39.6
	W0815 00:21:50.436335   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:21:50.436363   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.436877   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.437062   30723 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:21:50.437163   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:21:50.437197   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	W0815 00:21:50.437222   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:21:50.437303   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:21:50.437326   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:21:50.439994   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440018   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440367   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.440404   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:50.440426   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440440   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:50.440598   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.440702   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:21:50.440759   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.440824   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:21:50.440885   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.440932   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:21:50.440984   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.441025   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:21:50.661475   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:21:50.667943   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:21:50.667998   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:21:50.682256   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:21:50.682273   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:21:50.682338   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:21:50.699500   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:21:50.714377   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:21:50.714440   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:21:50.727274   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:21:50.739883   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:21:50.865517   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:21:51.003747   30723 docker.go:233] disabling docker service ...
	I0815 00:21:51.003820   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:21:51.017352   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:21:51.029133   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:21:51.154451   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:21:51.288112   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:21:51.301260   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:21:51.318378   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:21:51.318455   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.328767   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:21:51.328833   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.338383   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.347603   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.356884   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:21:51.366397   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.375473   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.390631   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:21:51.400012   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:21:51.408511   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:21:51.408566   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:21:51.420541   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:21:51.429688   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:51.547869   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
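
The block above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs", set conmon_cgroup to "pod", add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, then reload systemd and restart crio. A rough Go equivalent of two of those text edits is sketched below; the regular expressions mirror the sed patterns in the log, but this is only an illustration, not minikube's crio.go.

    // sketch: apply the same kind of line rewrites the sed commands above
    // perform on a CRI-O drop-in config file.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        cfg := string(data)

        rules := []struct{ re, repl string }{
            {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
            {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
        }
        for _, r := range rules {
            cfg = regexp.MustCompile(r.re).ReplaceAllString(cfg, r.repl)
        }

        if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
            log.Fatal(err)
        }
        // After this, the runtime needs a restart, e.g. `systemctl restart crio`.
    }
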
	I0815 00:21:51.678328   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:21:51.678409   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:21:51.683203   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:21:51.683252   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:21:51.686421   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:21:51.723286   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:21:51.723366   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:51.750523   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:21:51.779239   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:21:51.780623   30723 out.go:177]   - env NO_PROXY=192.168.39.6
	I0815 00:21:51.781870   30723 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:21:51.784550   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:51.784942   30723 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:21:39 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:21:51.784961   30723 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:21:51.785205   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:21:51.789029   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:51.800154   30723 mustload.go:65] Loading cluster: ha-863044
	I0815 00:21:51.800379   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:21:51.800761   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:51.800805   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:51.815216   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0815 00:21:51.815597   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:51.816063   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:51.816078   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:51.816341   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:51.816569   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:21:51.818064   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:51.818350   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:51.818387   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:51.832329   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0815 00:21:51.832783   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:51.833215   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:51.833235   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:51.833491   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:51.833636   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:51.833803   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.170
	I0815 00:21:51.833815   30723 certs.go:194] generating shared ca certs ...
	I0815 00:21:51.833831   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:51.833956   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:21:51.833992   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:21:51.834001   30723 certs.go:256] generating profile certs ...
	I0815 00:21:51.834064   30723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:21:51.834087   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b
	I0815 00:21:51.834100   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.254]
	I0815 00:21:52.092271   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b ...
	I0815 00:21:52.092297   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b: {Name:mk8be6d74c43afd827f181e50df7652f38161e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:52.092463   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b ...
	I0815 00:21:52.092476   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b: {Name:mk511d913c107fd588a9cf8a0c3a2ef42984fd3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:21:52.092542   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.e124014b -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:21:52.092700   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.e124014b -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:21:52.092850   30723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:21:52.092865   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:21:52.092880   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:21:52.092893   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:21:52.092905   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:21:52.092918   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:21:52.092930   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:21:52.092943   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:21:52.092955   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:21:52.093002   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:21:52.093029   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:21:52.093038   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:21:52.093059   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:21:52.093080   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:21:52.093100   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:21:52.093135   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:21:52.093160   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.093173   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.093185   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.093213   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:52.096735   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:52.097221   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:52.097241   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:52.097446   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:52.097649   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:52.097794   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:52.097962   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:52.181040   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0815 00:21:52.185719   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 00:21:52.196184   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0815 00:21:52.199804   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0815 00:21:52.209520   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 00:21:52.213244   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 00:21:52.224011   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0815 00:21:52.227492   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 00:21:52.237306   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0815 00:21:52.240797   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 00:21:52.250198   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0815 00:21:52.253751   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 00:21:52.263515   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:21:52.287634   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:21:52.309806   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:21:52.331532   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:21:52.353311   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 00:21:52.375376   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 00:21:52.400179   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:21:52.421867   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:21:52.443162   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:21:52.464906   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:21:52.486390   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:21:52.507486   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 00:21:52.522468   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0815 00:21:52.537690   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 00:21:52.553421   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 00:21:52.568859   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 00:21:52.584224   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 00:21:52.599035   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 00:21:52.613930   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:21:52.619258   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:21:52.628625   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.632994   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.633044   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:21:52.638788   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:21:52.649038   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:21:52.659230   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.663224   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.663272   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:21:52.668363   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:21:52.677457   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:21:52.686687   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.690555   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.690605   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:21:52.695555   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
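
The sequence above installs each CA bundle under /usr/share/ca-certificates and then links it from /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trust anchors at verification time. A sketch of that hash-and-link step, shelling out to the same `openssl x509 -hash -noout` call used in the log; the certificate path is a placeholder.

    // sketch: compute the OpenSSL subject hash of a CA cert and create the
    // /etc/ssl/certs/<hash>.0 symlink, mirroring the commands in the log.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("linked", link, "->", cert)
    }
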
	I0815 00:21:52.704856   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:21:52.708314   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:21:52.708361   30723 kubeadm.go:934] updating node {m02 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0815 00:21:52.708439   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:21:52.708469   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:21:52.708507   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:21:52.724921   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:21:52.724980   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
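
The generated kube-vip static pod manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below) so the kubelet runs it directly on each control-plane node. As a quick sanity check of such generated YAML, here is a small sketch that unmarshals it with gopkg.in/yaml.v3 and lists the container env entries; the struct models only the fields of interest here, the input path is illustrative, and this is not minikube's kube-vip.go.

    // sketch: parse a generated kube-vip manifest and print its env settings.
    package main

    import (
        "fmt"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    type manifest struct {
        Kind string `yaml:"kind"`
        Spec struct {
            Containers []struct {
                Name  string `yaml:"name"`
                Image string `yaml:"image"`
                Env   []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("kube-vip.yaml")
        if err != nil {
            log.Fatal(err)
        }
        var m manifest
        if err := yaml.Unmarshal(data, &m); err != nil {
            log.Fatal(err)
        }
        for _, c := range m.Spec.Containers {
            fmt.Printf("%s (%s)\n", c.Name, c.Image)
            for _, e := range c.Env {
                fmt.Printf("  %s=%s\n", e.Name, e.Value)
            }
        }
    }
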
	I0815 00:21:52.725035   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:52.733943   30723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 00:21:52.733999   30723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 00:21:52.742668   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 00:21:52.742694   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:21:52.742736   30723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 00:21:52.742766   30723 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 00:21:52.742767   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:21:52.746971   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 00:21:52.746991   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 00:21:54.975701   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:21:54.989491   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:21:54.989597   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:21:54.993221   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 00:21:54.993246   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0815 00:21:55.520848   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:21:55.520956   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:21:55.525966   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 00:21:55.526000   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
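
download.go above fetches each Kubernetes binary from dl.k8s.io together with its published .sha256 file and verifies the checksum before the binary is cached locally and scp'd into /var/lib/minikube/binaries/v1.31.0. A minimal sketch of that download-and-verify step; the URL and destination file name are taken from the log, and the code is an illustration rather than minikube's download package.

    // sketch: download a release binary, download its .sha256, and verify
    // the two match before installing the file.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"

        bin, err := fetch(url)
        if err != nil {
            log.Fatal(err)
        }
        sum, err := fetch(url + ".sha256")
        if err != nil {
            log.Fatal(err)
        }

        want := strings.Fields(string(sum))[0] // file may contain "<hash>  <name>"
        got := sha256.Sum256(bin)
        if hex.EncodeToString(got[:]) != want {
            log.Fatalf("checksum mismatch: got %x want %s", got, want)
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            log.Fatal(err)
        }
        fmt.Println("verified and wrote kubectl")
    }
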
	I0815 00:21:55.739980   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 00:21:55.748562   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 00:21:55.763555   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:21:55.778081   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:21:55.793097   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:21:55.796583   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:21:55.807629   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:21:55.938533   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:21:55.955576   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:21:55.956016   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:21:55.956068   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:21:55.970773   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0815 00:21:55.971258   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:21:55.971792   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:21:55.971813   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:21:55.972206   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:21:55.972382   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:21:55.972568   30723 start.go:317] joinCluster: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:21:55.972702   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 00:21:55.972727   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:21:55.975640   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:55.976046   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:21:55.976074   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:21:55.976206   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:21:55.976378   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:21:55.976527   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:21:55.976696   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:21:56.132045   30723 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:21:56.132103   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f85zt8.dk03u657aanxbkpc --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m02 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443"
	I0815 00:22:17.902402   30723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f85zt8.dk03u657aanxbkpc --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m02 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443": (21.770273412s)
	I0815 00:22:17.902495   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 00:22:18.486275   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044-m02 minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=false
	I0815 00:22:18.625669   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863044-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 00:22:18.771489   30723 start.go:319] duration metric: took 22.798918544s to joinCluster
	I0815 00:22:18.771602   30723 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:22:18.771919   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:22:18.773284   30723 out.go:177] * Verifying Kubernetes components...
	I0815 00:22:18.774595   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:22:18.998202   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:22:19.012004   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:22:19.012223   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 00:22:19.012272   30723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0815 00:22:19.012501   30723 node_ready.go:35] waiting up to 6m0s for node "ha-863044-m02" to be "Ready" ...
	I0815 00:22:19.012587   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:19.012596   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:19.012603   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:19.012607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:19.038987   30723 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0815 00:22:19.512830   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:19.512846   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:19.512857   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:19.512863   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:19.516445   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:20.013359   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:20.013381   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:20.013392   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:20.013401   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:20.017754   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:20.513504   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:20.513532   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:20.513543   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:20.513550   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:20.516750   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:21.013595   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:21.013619   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:21.013628   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:21.013631   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:21.016614   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:21.017204   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
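
From here the log is a polling loop: node_ready.go issues a GET on /api/v1/nodes/ha-863044-m02 roughly every 500ms until the node reports the Ready condition, within the 6m0s budget noted above. The equivalent check with client-go is sketched below; the kubeconfig path, interval and timeout are placeholders, not the values minikube uses internally.

    // sketch: poll a node until its NodeReady condition is True.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19443-13088/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-863044-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for node to become Ready")
    }
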
	I0815 00:22:21.513565   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:21.513594   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:21.513603   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:21.513607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:21.516521   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:22.013091   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:22.013111   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:22.013120   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:22.013123   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:22.016446   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:22.513547   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:22.513574   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:22.513585   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:22.513592   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:22.516694   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:23.013216   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:23.013243   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:23.013254   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:23.013259   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:23.023121   30723 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 00:22:23.023774   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:23.512826   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:23.512849   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:23.512859   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:23.512864   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:23.515760   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:24.012704   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:24.012724   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:24.012732   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:24.012735   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:24.016299   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:24.513521   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:24.513544   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:24.513563   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:24.513569   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:24.517034   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:25.012863   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:25.012885   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:25.012896   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:25.012901   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:25.140822   30723 round_trippers.go:574] Response Status: 200 OK in 127 milliseconds
	I0815 00:22:25.141378   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:25.513650   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:25.513676   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:25.513686   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:25.513692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:25.516868   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:26.012996   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:26.013015   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:26.013026   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:26.013036   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:26.025110   30723 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0815 00:22:26.512830   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:26.512851   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:26.512865   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:26.512869   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:26.516139   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:27.013040   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:27.013062   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:27.013074   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:27.013079   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:27.016495   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:27.513481   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:27.513504   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:27.513513   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:27.513520   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:27.516356   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:27.517133   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:28.013289   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:28.013318   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:28.013326   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:28.013330   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:28.016534   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:28.513573   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:28.513594   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:28.513602   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:28.513607   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:28.516770   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:29.012800   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:29.012822   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:29.012830   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:29.012833   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:29.016035   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:29.512918   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:29.512940   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:29.512947   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:29.512952   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:29.516290   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:30.013327   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:30.013351   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:30.013358   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:30.013362   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:30.016360   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:30.016850   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:30.513706   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:30.513726   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:30.513734   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:30.513739   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:30.516585   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:31.013105   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:31.013125   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:31.013133   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:31.013137   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:31.016090   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:31.512809   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:31.512841   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:31.512849   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:31.512852   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:31.515972   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:32.012770   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:32.012790   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:32.012798   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:32.012802   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:32.015906   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:32.512695   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:32.512716   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:32.512725   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:32.512728   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:32.515632   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:32.516406   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:33.013512   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:33.013533   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:33.013546   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:33.013550   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:33.016320   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:33.513289   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:33.513309   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:33.513316   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:33.513320   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:33.516207   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:34.013139   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:34.013161   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:34.013169   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:34.013172   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:34.016179   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:34.512839   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:34.512865   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:34.512876   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:34.512882   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:34.515453   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:35.012712   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:35.012736   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:35.012748   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:35.012754   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:35.022959   30723 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 00:22:35.023356   30723 node_ready.go:53] node "ha-863044-m02" has status "Ready":"False"
	I0815 00:22:35.513191   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:35.513214   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:35.513225   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:35.513230   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:35.516137   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:36.013509   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:36.013530   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:36.013538   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:36.013541   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:36.016798   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:36.512836   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:36.512862   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:36.512872   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:36.512878   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:36.516281   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.013011   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.013031   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.013039   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.013042   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.016590   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.017079   30723 node_ready.go:49] node "ha-863044-m02" has status "Ready":"True"
	I0815 00:22:37.017096   30723 node_ready.go:38] duration metric: took 18.004580218s for node "ha-863044-m02" to be "Ready" ...
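
For reference, the loop logged above (repeated GET /api/v1/nodes/ha-863044-m02 calls until the node reports Ready) is the usual client-go polling pattern. A minimal sketch of the same check, assuming a kubeconfig on disk; the kubeconfig path, poll interval and timeout below are placeholders, and this is not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the GET /api/v1/nodes/<name> loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	// Placeholder kubeconfig path; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-863044-m02", 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
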
	I0815 00:22:37.017113   30723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:22:37.017173   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:37.017181   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.017190   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.017194   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.021592   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:37.027616   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.027697   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bc2jh
	I0815 00:22:37.027707   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.027713   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.027722   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.030221   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.030983   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.030994   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.031001   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.031004   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.033177   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.033629   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.033649   30723 pod_ready.go:81] duration metric: took 6.01329ms for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.033657   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.033699   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxpqd
	I0815 00:22:37.033706   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.033712   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.033715   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.036052   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.036832   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.036845   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.036852   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.036855   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.038842   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.039438   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.039453   30723 pod_ready.go:81] duration metric: took 5.791539ms for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.039461   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.039501   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044
	I0815 00:22:37.039509   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.039515   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.039519   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.041705   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.042407   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.042419   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.042426   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.042430   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.044326   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.044772   30723 pod_ready.go:92] pod "etcd-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.044785   30723 pod_ready.go:81] duration metric: took 5.319056ms for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.044793   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.044829   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m02
	I0815 00:22:37.044836   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.044843   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.044847   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.046831   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:37.047403   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.047415   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.047421   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.047424   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.049788   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.050423   30723 pod_ready.go:92] pod "etcd-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.050441   30723 pod_ready.go:81] duration metric: took 5.642321ms for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.050458   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.213835   30723 request.go:632] Waited for 163.317682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:22:37.213904   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:22:37.213909   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.213917   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.213923   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.216844   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.413793   30723 request.go:632] Waited for 196.360496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.413861   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:37.413869   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.413880   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.413886   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.416825   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.417435   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.417453   30723 pod_ready.go:81] duration metric: took 366.985345ms for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.417463   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.613560   30723 request.go:632] Waited for 196.017014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:22:37.613619   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:22:37.613627   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.613635   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.613644   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.616818   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:37.813823   30723 request.go:632] Waited for 196.341076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.813879   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:37.813885   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:37.813892   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:37.813895   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:37.816850   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:37.817577   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:37.817593   30723 pod_ready.go:81] duration metric: took 400.124302ms for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:37.817602   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.013401   30723 request.go:632] Waited for 195.726582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:22:38.013473   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:22:38.013478   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.013485   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.013489   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.016577   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.213581   30723 request.go:632] Waited for 196.359714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:38.213654   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:38.213659   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.213668   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.213672   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.216766   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.217137   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:38.217155   30723 pod_ready.go:81] duration metric: took 399.546691ms for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.217163   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.413330   30723 request.go:632] Waited for 196.094896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:22:38.413389   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:22:38.413395   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.413402   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.413407   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.416538   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.613841   30723 request.go:632] Waited for 196.434899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:38.613918   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:38.613927   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.613935   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.613941   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.617214   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:38.617747   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:38.617773   30723 pod_ready.go:81] duration metric: took 400.603334ms for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.617789   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:38.813842   30723 request.go:632] Waited for 195.963426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:22:38.813893   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:22:38.813899   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:38.813906   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:38.813911   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:38.816702   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.013619   30723 request.go:632] Waited for 196.34729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:39.013706   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:39.013714   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.013722   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.013726   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.016543   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.017139   30723 pod_ready.go:92] pod "kube-proxy-6l4gp" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.017157   30723 pod_ready.go:81] duration metric: took 399.360176ms for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.017169   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.213268   30723 request.go:632] Waited for 196.035432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:22:39.213347   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:22:39.213354   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.213361   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.213364   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.216285   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.413361   30723 request.go:632] Waited for 196.348438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.413427   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.413434   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.413444   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.413453   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.416456   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:22:39.417033   30723 pod_ready.go:92] pod "kube-proxy-758vr" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.417051   30723 pod_ready.go:81] duration metric: took 399.876068ms for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.417060   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.613052   30723 request.go:632] Waited for 195.936806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:22:39.613116   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:22:39.613123   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.613133   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.613139   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.616328   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:39.813503   30723 request.go:632] Waited for 196.344352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.813571   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:22:39.813576   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:39.813584   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:39.813591   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:39.816987   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:39.817641   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:39.817664   30723 pod_ready.go:81] duration metric: took 400.594569ms for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:39.817676   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:40.013706   30723 request.go:632] Waited for 195.955688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:22:40.013765   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:22:40.013770   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.013778   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.013781   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.016871   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.213637   30723 request.go:632] Waited for 196.191598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:40.213709   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:22:40.213719   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.213728   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.213734   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.217048   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.217846   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:22:40.217866   30723 pod_ready.go:81] duration metric: took 400.177976ms for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:22:40.217880   30723 pod_ready.go:38] duration metric: took 3.200753657s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
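
Each of the per-pod waits above reduces to the same poll with a different predicate: fetch the pod and check its PodReady condition. A sketch of that predicate using client-go types (the package and function names are illustrative, not minikube's pod_ready.go):

package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's PodReady condition is True, which is the
// check behind the `has status "Ready":"True"` lines in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}
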
	I0815 00:22:40.217898   30723 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:22:40.217952   30723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:22:40.233279   30723 api_server.go:72] duration metric: took 21.461634198s to wait for apiserver process to appear ...
	I0815 00:22:40.233296   30723 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:22:40.233312   30723 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0815 00:22:40.240396   30723 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0815 00:22:40.240466   30723 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0815 00:22:40.240476   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.240487   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.240496   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.241592   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:22:40.241712   30723 api_server.go:141] control plane version: v1.31.0
	I0815 00:22:40.241727   30723 api_server.go:131] duration metric: took 8.426075ms to wait for apiserver health ...
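
The healthz step above is a raw GET against the apiserver's /healthz path, with a 200 response and an "ok" body treated as healthy. The same probe can be expressed through a clientset's REST client; an illustrative helper, not minikube's api_server.go:

package readiness

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy issues GET /healthz against the apiserver, as in the log
// above, and treats an "ok" body as healthy.
func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) bool {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	return err == nil && string(body) == "ok"
}
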
	I0815 00:22:40.241735   30723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:22:40.413319   30723 request.go:632] Waited for 171.496588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.413371   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.413376   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.413383   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.413388   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.418439   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:22:40.422523   30723 system_pods.go:59] 17 kube-system pods found
	I0815 00:22:40.422546   30723 system_pods.go:61] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:22:40.422551   30723 system_pods.go:61] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:22:40.422554   30723 system_pods.go:61] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:22:40.422558   30723 system_pods.go:61] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:22:40.422561   30723 system_pods.go:61] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:22:40.422564   30723 system_pods.go:61] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:22:40.422567   30723 system_pods.go:61] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:22:40.422570   30723 system_pods.go:61] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:22:40.422573   30723 system_pods.go:61] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:22:40.422576   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:22:40.422579   30723 system_pods.go:61] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:22:40.422582   30723 system_pods.go:61] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:22:40.422585   30723 system_pods.go:61] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:22:40.422587   30723 system_pods.go:61] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:22:40.422590   30723 system_pods.go:61] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:22:40.422593   30723 system_pods.go:61] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:22:40.422596   30723 system_pods.go:61] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:22:40.422601   30723 system_pods.go:74] duration metric: took 180.861182ms to wait for pod list to return data ...
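
The "17 kube-system pods found" output comes from a plain namespace-scoped pod list. An equivalent sketch (helper name is illustrative) that prints each pod's name, UID and phase much like the lines above:

package readiness

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods lists the kube-system namespace and prints each pod's phase,
// mirroring the system_pods.go output in the log above.
func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}
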
	I0815 00:22:40.422611   30723 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:22:40.613804   30723 request.go:632] Waited for 191.125258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:22:40.613855   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:22:40.613863   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.613870   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.613876   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.617566   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:40.617782   30723 default_sa.go:45] found service account: "default"
	I0815 00:22:40.617795   30723 default_sa.go:55] duration metric: took 195.179763ms for default service account to be created ...
	I0815 00:22:40.617803   30723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:22:40.813165   30723 request.go:632] Waited for 195.287376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.813212   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:22:40.813218   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:40.813225   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:40.813229   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:40.817620   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:22:40.821578   30723 system_pods.go:86] 17 kube-system pods found
	I0815 00:22:40.821600   30723 system_pods.go:89] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:22:40.821606   30723 system_pods.go:89] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:22:40.821610   30723 system_pods.go:89] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:22:40.821614   30723 system_pods.go:89] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:22:40.821620   30723 system_pods.go:89] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:22:40.821624   30723 system_pods.go:89] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:22:40.821628   30723 system_pods.go:89] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:22:40.821632   30723 system_pods.go:89] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:22:40.821641   30723 system_pods.go:89] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:22:40.821645   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:22:40.821651   30723 system_pods.go:89] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:22:40.821655   30723 system_pods.go:89] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:22:40.821659   30723 system_pods.go:89] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:22:40.821663   30723 system_pods.go:89] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:22:40.821669   30723 system_pods.go:89] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:22:40.821673   30723 system_pods.go:89] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:22:40.821677   30723 system_pods.go:89] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:22:40.821683   30723 system_pods.go:126] duration metric: took 203.876122ms to wait for k8s-apps to be running ...
	I0815 00:22:40.821692   30723 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:22:40.821734   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:22:40.838015   30723 system_svc.go:56] duration metric: took 16.314738ms WaitForService to wait for kubelet
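
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH inside the VM and passes when the command exits 0. A local stand-in for the same idea, assuming it executes on the node itself rather than through minikube's ssh_runner:

package readiness

import (
	"context"
	"os/exec"
)

// kubeletActive returns true when systemd reports the kubelet unit as active;
// `is-active --quiet` exits non-zero otherwise.
func kubeletActive(ctx context.Context) bool {
	return exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}
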
	I0815 00:22:40.838036   30723 kubeadm.go:582] duration metric: took 22.066393295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:22:40.838053   30723 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:22:41.013823   30723 request.go:632] Waited for 175.704777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0815 00:22:41.013872   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0815 00:22:41.013877   30723 round_trippers.go:469] Request Headers:
	I0815 00:22:41.013884   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:22:41.013888   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:22:41.017502   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:22:41.018221   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:22:41.018245   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:22:41.018255   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:22:41.018260   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:22:41.018264   30723 node_conditions.go:105] duration metric: took 180.206048ms to run NodePressure ...
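
The NodePressure step reads each node's capacity from status.capacity (ephemeral-storage and cpu in the lines above). A sketch of the same read; the helper name is illustrative:

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints ephemeral-storage and CPU
// capacity, matching the node_conditions.go output in the log above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
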
	I0815 00:22:41.018274   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:22:41.018297   30723 start.go:255] writing updated cluster config ...
	I0815 00:22:41.020376   30723 out.go:177] 
	I0815 00:22:41.021665   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:22:41.021741   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:22:41.023206   30723 out.go:177] * Starting "ha-863044-m03" control-plane node in "ha-863044" cluster
	I0815 00:22:41.024169   30723 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:22:41.024188   30723 cache.go:56] Caching tarball of preloaded images
	I0815 00:22:41.024275   30723 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:22:41.024285   30723 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:22:41.024365   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:22:41.024511   30723 start.go:360] acquireMachinesLock for ha-863044-m03: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:22:41.024548   30723 start.go:364] duration metric: took 19.263µs to acquireMachinesLock for "ha-863044-m03"
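
acquireMachinesLock serializes machine create/start operations behind a named lock (the log shows a 500ms retry delay and 13m timeout). A rough file-lock stand-in for the idea, not minikube's actual lock implementation; Linux-only since it relies on flock(2):

package provision

import (
	"os"
	"syscall"
)

// acquireMachinesLock blocks on an exclusive flock over a shared lock file and
// returns a release function; a simplified analogue of the step in the log.
func acquireMachinesLock(path string) (release func() error, err error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		f.Close()
		return nil, err
	}
	return func() error {
		defer f.Close()
		return syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	}, nil
}
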
	I0815 00:22:41.024562   30723 start.go:93] Provisioning new machine with config: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:22:41.024645   30723 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 00:22:41.025969   30723 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 00:22:41.026063   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:22:41.026100   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:22:41.040958   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0815 00:22:41.041364   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:22:41.041802   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:22:41.041820   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:22:41.042132   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:22:41.042294   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:22:41.042405   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:22:41.042529   30723 start.go:159] libmachine.API.Create for "ha-863044" (driver="kvm2")
	I0815 00:22:41.042564   30723 client.go:168] LocalClient.Create starting
	I0815 00:22:41.042606   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 00:22:41.042651   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:22:41.042672   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:22:41.042743   30723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 00:22:41.042776   30723 main.go:141] libmachine: Decoding PEM data...
	I0815 00:22:41.042797   30723 main.go:141] libmachine: Parsing certificate...
	I0815 00:22:41.042822   30723 main.go:141] libmachine: Running pre-create checks...
	I0815 00:22:41.042835   30723 main.go:141] libmachine: (ha-863044-m03) Calling .PreCreateCheck
	I0815 00:22:41.042984   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:22:41.043375   30723 main.go:141] libmachine: Creating machine...
	I0815 00:22:41.043389   30723 main.go:141] libmachine: (ha-863044-m03) Calling .Create
	I0815 00:22:41.043504   30723 main.go:141] libmachine: (ha-863044-m03) Creating KVM machine...
	I0815 00:22:41.044534   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found existing default KVM network
	I0815 00:22:41.044709   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found existing private KVM network mk-ha-863044
	I0815 00:22:41.044838   30723 main.go:141] libmachine: (ha-863044-m03) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 ...
	I0815 00:22:41.044858   30723 main.go:141] libmachine: (ha-863044-m03) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:22:41.044917   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.044841   31483 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:22:41.045021   30723 main.go:141] libmachine: (ha-863044-m03) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 00:22:41.269348   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.269218   31483 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa...
	I0815 00:22:41.379165   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.379064   31483 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/ha-863044-m03.rawdisk...
	I0815 00:22:41.379193   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Writing magic tar header
	I0815 00:22:41.379207   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Writing SSH key tar header
	I0815 00:22:41.379218   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:41.379188   31483 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 ...
	I0815 00:22:41.379321   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03
	I0815 00:22:41.379346   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03 (perms=drwx------)
	I0815 00:22:41.379361   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 00:22:41.379386   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:22:41.379400   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 00:22:41.379417   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 00:22:41.379434   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 00:22:41.379450   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 00:22:41.379466   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 00:22:41.379481   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 00:22:41.379495   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 00:22:41.379508   30723 main.go:141] libmachine: (ha-863044-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 00:22:41.379520   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Checking permissions on dir: /home
	I0815 00:22:41.379532   30723 main.go:141] libmachine: (ha-863044-m03) Creating domain...
	I0815 00:22:41.379558   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Skipping /home - not owner
	I0815 00:22:41.380342   30723 main.go:141] libmachine: (ha-863044-m03) define libvirt domain using xml: 
	I0815 00:22:41.380365   30723 main.go:141] libmachine: (ha-863044-m03) <domain type='kvm'>
	I0815 00:22:41.380375   30723 main.go:141] libmachine: (ha-863044-m03)   <name>ha-863044-m03</name>
	I0815 00:22:41.380384   30723 main.go:141] libmachine: (ha-863044-m03)   <memory unit='MiB'>2200</memory>
	I0815 00:22:41.380393   30723 main.go:141] libmachine: (ha-863044-m03)   <vcpu>2</vcpu>
	I0815 00:22:41.380399   30723 main.go:141] libmachine: (ha-863044-m03)   <features>
	I0815 00:22:41.380408   30723 main.go:141] libmachine: (ha-863044-m03)     <acpi/>
	I0815 00:22:41.380413   30723 main.go:141] libmachine: (ha-863044-m03)     <apic/>
	I0815 00:22:41.380418   30723 main.go:141] libmachine: (ha-863044-m03)     <pae/>
	I0815 00:22:41.380426   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380436   30723 main.go:141] libmachine: (ha-863044-m03)   </features>
	I0815 00:22:41.380451   30723 main.go:141] libmachine: (ha-863044-m03)   <cpu mode='host-passthrough'>
	I0815 00:22:41.380463   30723 main.go:141] libmachine: (ha-863044-m03)   
	I0815 00:22:41.380474   30723 main.go:141] libmachine: (ha-863044-m03)   </cpu>
	I0815 00:22:41.380486   30723 main.go:141] libmachine: (ha-863044-m03)   <os>
	I0815 00:22:41.380496   30723 main.go:141] libmachine: (ha-863044-m03)     <type>hvm</type>
	I0815 00:22:41.380505   30723 main.go:141] libmachine: (ha-863044-m03)     <boot dev='cdrom'/>
	I0815 00:22:41.380515   30723 main.go:141] libmachine: (ha-863044-m03)     <boot dev='hd'/>
	I0815 00:22:41.380537   30723 main.go:141] libmachine: (ha-863044-m03)     <bootmenu enable='no'/>
	I0815 00:22:41.380548   30723 main.go:141] libmachine: (ha-863044-m03)   </os>
	I0815 00:22:41.380553   30723 main.go:141] libmachine: (ha-863044-m03)   <devices>
	I0815 00:22:41.380561   30723 main.go:141] libmachine: (ha-863044-m03)     <disk type='file' device='cdrom'>
	I0815 00:22:41.380570   30723 main.go:141] libmachine: (ha-863044-m03)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/boot2docker.iso'/>
	I0815 00:22:41.380577   30723 main.go:141] libmachine: (ha-863044-m03)       <target dev='hdc' bus='scsi'/>
	I0815 00:22:41.380583   30723 main.go:141] libmachine: (ha-863044-m03)       <readonly/>
	I0815 00:22:41.380590   30723 main.go:141] libmachine: (ha-863044-m03)     </disk>
	I0815 00:22:41.380596   30723 main.go:141] libmachine: (ha-863044-m03)     <disk type='file' device='disk'>
	I0815 00:22:41.380604   30723 main.go:141] libmachine: (ha-863044-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 00:22:41.380615   30723 main.go:141] libmachine: (ha-863044-m03)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/ha-863044-m03.rawdisk'/>
	I0815 00:22:41.380625   30723 main.go:141] libmachine: (ha-863044-m03)       <target dev='hda' bus='virtio'/>
	I0815 00:22:41.380647   30723 main.go:141] libmachine: (ha-863044-m03)     </disk>
	I0815 00:22:41.380686   30723 main.go:141] libmachine: (ha-863044-m03)     <interface type='network'>
	I0815 00:22:41.380698   30723 main.go:141] libmachine: (ha-863044-m03)       <source network='mk-ha-863044'/>
	I0815 00:22:41.380705   30723 main.go:141] libmachine: (ha-863044-m03)       <model type='virtio'/>
	I0815 00:22:41.380714   30723 main.go:141] libmachine: (ha-863044-m03)     </interface>
	I0815 00:22:41.380720   30723 main.go:141] libmachine: (ha-863044-m03)     <interface type='network'>
	I0815 00:22:41.380728   30723 main.go:141] libmachine: (ha-863044-m03)       <source network='default'/>
	I0815 00:22:41.380732   30723 main.go:141] libmachine: (ha-863044-m03)       <model type='virtio'/>
	I0815 00:22:41.380740   30723 main.go:141] libmachine: (ha-863044-m03)     </interface>
	I0815 00:22:41.380745   30723 main.go:141] libmachine: (ha-863044-m03)     <serial type='pty'>
	I0815 00:22:41.380751   30723 main.go:141] libmachine: (ha-863044-m03)       <target port='0'/>
	I0815 00:22:41.380760   30723 main.go:141] libmachine: (ha-863044-m03)     </serial>
	I0815 00:22:41.380770   30723 main.go:141] libmachine: (ha-863044-m03)     <console type='pty'>
	I0815 00:22:41.380783   30723 main.go:141] libmachine: (ha-863044-m03)       <target type='serial' port='0'/>
	I0815 00:22:41.380791   30723 main.go:141] libmachine: (ha-863044-m03)     </console>
	I0815 00:22:41.380803   30723 main.go:141] libmachine: (ha-863044-m03)     <rng model='virtio'>
	I0815 00:22:41.380814   30723 main.go:141] libmachine: (ha-863044-m03)       <backend model='random'>/dev/random</backend>
	I0815 00:22:41.380825   30723 main.go:141] libmachine: (ha-863044-m03)     </rng>
	I0815 00:22:41.380832   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380836   30723 main.go:141] libmachine: (ha-863044-m03)     
	I0815 00:22:41.380849   30723 main.go:141] libmachine: (ha-863044-m03)   </devices>
	I0815 00:22:41.380860   30723 main.go:141] libmachine: (ha-863044-m03) </domain>
	I0815 00:22:41.380871   30723 main.go:141] libmachine: (ha-863044-m03) 
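
For context, the XML document dumped above is what the kvm2 driver hands to libvirt before "Creating domain...". Below is a minimal sketch of that define-and-boot step using the libvirt Go bindings; the package path, connection URI, and error handling are assumptions for illustration, not the driver's actual code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart persistently defines a domain from XML (like the <domain> document
// logged above) and boots it, which is where the "Waiting to get IP..." phase begins.
func defineAndStart(domainXML string) error {
	// The kvm2 driver talks to the local system libvirt daemon.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create()
}

func main() {
	// Placeholder XML; the real run uses the full <domain> document shown in the log.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
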
	I0815 00:22:41.387469   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:a4:a0:77 in network default
	I0815 00:22:41.388017   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring networks are active...
	I0815 00:22:41.388036   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:41.388766   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring network default is active
	I0815 00:22:41.389100   30723 main.go:141] libmachine: (ha-863044-m03) Ensuring network mk-ha-863044 is active
	I0815 00:22:41.389419   30723 main.go:141] libmachine: (ha-863044-m03) Getting domain xml...
	I0815 00:22:41.390092   30723 main.go:141] libmachine: (ha-863044-m03) Creating domain...
	I0815 00:22:42.603059   30723 main.go:141] libmachine: (ha-863044-m03) Waiting to get IP...
	I0815 00:22:42.603812   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:42.604183   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:42.604214   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:42.604174   31483 retry.go:31] will retry after 234.358514ms: waiting for machine to come up
	I0815 00:22:42.840754   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:42.841084   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:42.841106   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:42.841048   31483 retry.go:31] will retry after 349.958791ms: waiting for machine to come up
	I0815 00:22:43.192467   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:43.192863   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:43.192890   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:43.192820   31483 retry.go:31] will retry after 358.098773ms: waiting for machine to come up
	I0815 00:22:43.552357   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:43.552797   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:43.552820   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:43.552770   31483 retry.go:31] will retry after 600.033913ms: waiting for machine to come up
	I0815 00:22:44.153805   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:44.154202   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:44.154228   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:44.154156   31483 retry.go:31] will retry after 616.990211ms: waiting for machine to come up
	I0815 00:22:44.773276   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:44.773815   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:44.773844   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:44.773763   31483 retry.go:31] will retry after 631.014269ms: waiting for machine to come up
	I0815 00:22:45.406591   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:45.407103   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:45.407129   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:45.407057   31483 retry.go:31] will retry after 1.084067737s: waiting for machine to come up
	I0815 00:22:46.493045   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:46.493493   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:46.493520   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:46.493458   31483 retry.go:31] will retry after 1.084636321s: waiting for machine to come up
	I0815 00:22:47.579722   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:47.580142   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:47.580174   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:47.580088   31483 retry.go:31] will retry after 1.283830855s: waiting for machine to come up
	I0815 00:22:48.867178   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:48.867702   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:48.867733   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:48.867654   31483 retry.go:31] will retry after 1.554254773s: waiting for machine to come up
	I0815 00:22:50.423320   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:50.423781   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:50.423808   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:50.423725   31483 retry.go:31] will retry after 1.892180005s: waiting for machine to come up
	I0815 00:22:52.317816   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:52.318256   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:52.318280   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:52.318200   31483 retry.go:31] will retry after 2.515000093s: waiting for machine to come up
	I0815 00:22:54.835775   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:54.836120   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:54.836144   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:54.836089   31483 retry.go:31] will retry after 3.437903548s: waiting for machine to come up
	I0815 00:22:58.277292   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:22:58.277724   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find current IP address of domain ha-863044-m03 in network mk-ha-863044
	I0815 00:22:58.277782   30723 main.go:141] libmachine: (ha-863044-m03) DBG | I0815 00:22:58.277716   31483 retry.go:31] will retry after 4.166628489s: waiting for machine to come up
	I0815 00:23:02.445716   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.446135   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has current primary IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.446150   30723 main.go:141] libmachine: (ha-863044-m03) Found IP for machine: 192.168.39.30
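
The "will retry after ..." lines above are a poll-with-backoff loop waiting for the new VM's DHCP lease to appear for MAC 52:54:00:5e:df:2b. A stand-alone sketch of that pattern follows; the lookupIP helper and the backoff constants are illustrative assumptions, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the VM's MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder: always "not found" in this sketch
}

// waitForIP polls lookupIP with a growing delay until an address appears or the timeout
// is hit, mirroring the increasing retry intervals visible in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the interval between attempts
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:5e:df:2b", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
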
	I0815 00:23:02.446160   30723 main.go:141] libmachine: (ha-863044-m03) Reserving static IP address...
	I0815 00:23:02.446566   30723 main.go:141] libmachine: (ha-863044-m03) DBG | unable to find host DHCP lease matching {name: "ha-863044-m03", mac: "52:54:00:5e:df:2b", ip: "192.168.39.30"} in network mk-ha-863044
	I0815 00:23:02.520969   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Getting to WaitForSSH function...
	I0815 00:23:02.521002   30723 main.go:141] libmachine: (ha-863044-m03) Reserved static IP address: 192.168.39.30
	I0815 00:23:02.521015   30723 main.go:141] libmachine: (ha-863044-m03) Waiting for SSH to be available...
	I0815 00:23:02.523316   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.523676   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.523710   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.523874   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using SSH client type: external
	I0815 00:23:02.523900   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa (-rw-------)
	I0815 00:23:02.523933   30723 main.go:141] libmachine: (ha-863044-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 00:23:02.523951   30723 main.go:141] libmachine: (ha-863044-m03) DBG | About to run SSH command:
	I0815 00:23:02.523965   30723 main.go:141] libmachine: (ha-863044-m03) DBG | exit 0
	I0815 00:23:02.644472   30723 main.go:141] libmachine: (ha-863044-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 00:23:02.644771   30723 main.go:141] libmachine: (ha-863044-m03) KVM machine creation complete!
	I0815 00:23:02.645105   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:23:02.645586   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:02.645787   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:02.645926   30723 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 00:23:02.645942   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:23:02.647102   30723 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 00:23:02.647115   30723 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 00:23:02.647122   30723 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 00:23:02.647130   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.649413   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.649805   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.649830   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.650044   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.650233   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.650405   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.650535   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.650733   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.650939   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.650953   30723 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 00:23:02.755712   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:23:02.755729   30723 main.go:141] libmachine: Detecting the provisioner...
	I0815 00:23:02.755737   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.758198   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.758550   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.758577   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.758737   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.758923   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.759080   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.759220   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.759374   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.759574   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.759588   30723 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 00:23:02.860851   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 00:23:02.860922   30723 main.go:141] libmachine: found compatible host: buildroot
	I0815 00:23:02.860938   30723 main.go:141] libmachine: Provisioning with buildroot...
	I0815 00:23:02.860951   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:02.861185   30723 buildroot.go:166] provisioning hostname "ha-863044-m03"
	I0815 00:23:02.861207   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:02.861364   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.863861   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.864294   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.864314   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.864460   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.864632   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.864784   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.864892   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.865031   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.865209   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.865219   30723 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044-m03 && echo "ha-863044-m03" | sudo tee /etc/hostname
	I0815 00:23:02.977169   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044-m03
	
	I0815 00:23:02.977194   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:02.979736   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.980092   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:02.980120   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:02.980281   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:02.980453   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.980588   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:02.980714   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:02.980875   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:02.981037   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:02.981059   30723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:23:03.088946   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
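
The hostname and /etc/hosts commands above run over SSH as user docker with the freshly generated id_rsa key, with strict host key checking disabled as the external-SSH options earlier in the log show. Below is a minimal stand-alone equivalent using golang.org/x/crypto/ssh; minikube's own ssh_runner differs in detail, and the command passed in main is just the one quoted from the log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on addr, authenticating with the given private key file.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.30:22", "docker",
		"/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa",
		`sudo hostname ha-863044-m03 && echo "ha-863044-m03" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("ssh command failed: %v\n%s", err, out)
	}
	fmt.Print(out)
}
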
	I0815 00:23:03.088969   30723 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:23:03.088982   30723 buildroot.go:174] setting up certificates
	I0815 00:23:03.088990   30723 provision.go:84] configureAuth start
	I0815 00:23:03.088998   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetMachineName
	I0815 00:23:03.089290   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.092163   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.092527   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.092559   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.092709   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.094875   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.095171   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.095195   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.095365   30723 provision.go:143] copyHostCerts
	I0815 00:23:03.095394   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:23:03.095425   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:23:03.095433   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:23:03.095497   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:23:03.095564   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:23:03.095581   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:23:03.095589   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:23:03.095613   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:23:03.095662   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:23:03.095679   30723 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:23:03.095686   30723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:23:03.095708   30723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:23:03.095756   30723 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044-m03 san=[127.0.0.1 192.168.39.30 ha-863044-m03 localhost minikube]
	I0815 00:23:03.155012   30723 provision.go:177] copyRemoteCerts
	I0815 00:23:03.155061   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:23:03.155083   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.157492   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.157819   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.157846   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.157993   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.158161   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.158309   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.158462   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.238464   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:23:03.238527   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:23:03.262331   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:23:03.262400   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 00:23:03.286135   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:23:03.286199   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:23:03.310148   30723 provision.go:87] duration metric: took 221.143534ms to configureAuth
	I0815 00:23:03.310175   30723 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:23:03.310352   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:03.310416   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.312961   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.313337   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.313365   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.313513   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.313696   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.313882   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.314028   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.314215   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:03.314406   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:03.314426   30723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:23:03.577378   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:23:03.577409   30723 main.go:141] libmachine: Checking connection to Docker...
	I0815 00:23:03.577420   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetURL
	I0815 00:23:03.578583   30723 main.go:141] libmachine: (ha-863044-m03) DBG | Using libvirt version 6000000
	I0815 00:23:03.580950   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.581334   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.581363   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.581524   30723 main.go:141] libmachine: Docker is up and running!
	I0815 00:23:03.581540   30723 main.go:141] libmachine: Reticulating splines...
	I0815 00:23:03.581548   30723 client.go:171] duration metric: took 22.538971017s to LocalClient.Create
	I0815 00:23:03.581573   30723 start.go:167] duration metric: took 22.539045128s to libmachine.API.Create "ha-863044"
	I0815 00:23:03.581584   30723 start.go:293] postStartSetup for "ha-863044-m03" (driver="kvm2")
	I0815 00:23:03.581597   30723 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:23:03.581618   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.581839   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:23:03.581865   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.583908   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.584264   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.584291   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.584411   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.584570   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.584744   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.584920   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.665974   30723 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:23:03.669868   30723 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:23:03.669891   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:23:03.669944   30723 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:23:03.670012   30723 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:23:03.670021   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:23:03.670098   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:23:03.678728   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:23:03.700112   30723 start.go:296] duration metric: took 118.515675ms for postStartSetup
	I0815 00:23:03.700152   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetConfigRaw
	I0815 00:23:03.700769   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.703245   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.703600   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.703630   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.703842   30723 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:23:03.704015   30723 start.go:128] duration metric: took 22.679361913s to createHost
	I0815 00:23:03.704037   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.706285   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.706611   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.706637   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.706779   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.706909   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.707039   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.707139   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.707282   30723 main.go:141] libmachine: Using SSH client type: native
	I0815 00:23:03.707441   30723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0815 00:23:03.707452   30723 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:23:03.804906   30723 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681383.766307938
	
	I0815 00:23:03.804927   30723 fix.go:216] guest clock: 1723681383.766307938
	I0815 00:23:03.804935   30723 fix.go:229] Guest: 2024-08-15 00:23:03.766307938 +0000 UTC Remote: 2024-08-15 00:23:03.704024469 +0000 UTC m=+145.856173876 (delta=62.283469ms)
	I0815 00:23:03.804950   30723 fix.go:200] guest clock delta is within tolerance: 62.283469ms
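
The guest-clock check above runs date on the VM, parses the seconds.nanoseconds output, and accepts the result when it is close enough to the host clock. A minimal version of that comparison follows; the 2s tolerance used here is an assumption, not fix.go's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses "seconds.nanoseconds" output such as 1723681383.766307938.
func guestTime(output string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1723681383.766307938") // value quoted from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within assumed 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}
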
	I0815 00:23:03.804954   30723 start.go:83] releasing machines lock for "ha-863044-m03", held for 22.780400611s
	I0815 00:23:03.804971   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.805256   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:03.807665   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.808040   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.808058   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.810229   30723 out.go:177] * Found network options:
	I0815 00:23:03.811510   30723 out.go:177]   - NO_PROXY=192.168.39.6,192.168.39.170
	W0815 00:23:03.812593   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 00:23:03.812609   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:23:03.812619   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813209   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813379   30723 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:23:03.813465   30723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:23:03.813510   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	W0815 00:23:03.813541   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 00:23:03.813564   30723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 00:23:03.813630   30723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:23:03.813648   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:23:03.816313   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816445   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816698   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.816723   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816768   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:03.816796   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:03.816872   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.817049   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.817073   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:23:03.817207   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:23:03.817208   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.817370   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:03.817399   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:23:03.817532   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:23:04.045451   30723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:23:04.051702   30723 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:23:04.051766   30723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:23:04.067872   30723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 00:23:04.067891   30723 start.go:495] detecting cgroup driver to use...
	I0815 00:23:04.067952   30723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:23:04.083179   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:23:04.095780   30723 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:23:04.095834   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:23:04.108241   30723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:23:04.121145   30723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:23:04.242613   30723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:23:04.399000   30723 docker.go:233] disabling docker service ...
	I0815 00:23:04.399082   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:23:04.413030   30723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:23:04.424872   30723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:23:04.534438   30723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:23:04.641008   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:23:04.654571   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:23:04.671767   30723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:23:04.671847   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.681525   30723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:23:04.681592   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.691399   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.702111   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.711792   30723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:23:04.721433   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.730986   30723 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.749433   30723 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:23:04.760129   30723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:23:04.769285   30723 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 00:23:04.769348   30723 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 00:23:04.782190   30723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:23:04.791844   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:04.899751   30723 ssh_runner.go:195] Run: sudo systemctl restart crio
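
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as its pause image and cgroupfs as its cgroup manager before the service is restarted. Below is an illustrative local equivalent of the two main substitutions in Go; the file path is taken from the log, everything else is an assumption.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
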
	I0815 00:23:05.032342   30723 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:23:05.032429   30723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:23:05.036908   30723 start.go:563] Will wait 60s for crictl version
	I0815 00:23:05.036962   30723 ssh_runner.go:195] Run: which crictl
	I0815 00:23:05.040405   30723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:23:05.082663   30723 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:23:05.082730   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:23:05.112643   30723 ssh_runner.go:195] Run: crio --version
	I0815 00:23:05.141341   30723 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:23:05.142668   30723 out.go:177]   - env NO_PROXY=192.168.39.6
	I0815 00:23:05.143850   30723 out.go:177]   - env NO_PROXY=192.168.39.6,192.168.39.170
	I0815 00:23:05.144851   30723 main.go:141] libmachine: (ha-863044-m03) Calling .GetIP
	I0815 00:23:05.147297   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:05.147618   30723 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:23:05.147654   30723 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:23:05.147836   30723 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:23:05.151706   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:23:05.163415   30723 mustload.go:65] Loading cluster: ha-863044
	I0815 00:23:05.163668   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:05.163947   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:05.163995   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:05.180222   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0815 00:23:05.180631   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:05.181091   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:05.181112   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:05.181430   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:05.181634   30723 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:23:05.183073   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:23:05.183408   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:05.183440   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:05.198183   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0815 00:23:05.198572   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:05.199070   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:05.199094   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:05.199409   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:05.199593   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:23:05.199723   30723 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.30
	I0815 00:23:05.199734   30723 certs.go:194] generating shared ca certs ...
	I0815 00:23:05.199747   30723 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.199856   30723 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:23:05.199892   30723 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:23:05.199900   30723 certs.go:256] generating profile certs ...
	I0815 00:23:05.199962   30723 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:23:05.199986   30723 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460
	I0815 00:23:05.200002   30723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.30 192.168.39.254]
	I0815 00:23:05.294220   30723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 ...
	I0815 00:23:05.294249   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460: {Name:mk0950b6d97069d8aa367779aabd7a73d7c2423e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.294422   30723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460 ...
	I0815 00:23:05.294434   30723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460: {Name:mka467de40a002e45b894a979d221dbb7b5a2008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:23:05.294503   30723 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.fb5d4460 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:23:05.294634   30723 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.fb5d4460 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:23:05.294829   30723 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:23:05.294850   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:23:05.294880   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:23:05.294894   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:23:05.294906   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:23:05.294918   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:23:05.294931   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:23:05.294943   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:23:05.294953   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:23:05.295019   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:23:05.295049   30723 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:23:05.295059   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:23:05.295079   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:23:05.295100   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:23:05.295123   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:23:05.295168   30723 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:23:05.295193   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.295205   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.295215   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.295244   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:23:05.298013   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:05.298361   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:23:05.298386   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:05.298543   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:23:05.298708   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:23:05.298874   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:23:05.298992   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:23:05.376939   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0815 00:23:05.381995   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 00:23:05.393346   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0815 00:23:05.397855   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0815 00:23:05.408041   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 00:23:05.411683   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 00:23:05.420961   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0815 00:23:05.424606   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 00:23:05.433785   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0815 00:23:05.437463   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 00:23:05.446772   30723 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0815 00:23:05.450349   30723 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0815 00:23:05.460013   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:23:05.483481   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:23:05.505263   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:23:05.526935   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:23:05.549754   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 00:23:05.571986   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:23:05.603518   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:23:05.625444   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:23:05.647240   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:23:05.669239   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:23:05.690391   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:23:05.713453   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 00:23:05.728875   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0815 00:23:05.744592   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 00:23:05.759747   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 00:23:05.774921   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 00:23:05.789979   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0815 00:23:05.805061   30723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
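	All certificates generated above, including the regenerated apiserver certificate that now lists the new node 192.168.39.30 and the VIP 192.168.39.254 among its IP SANs, have been copied under /var/lib/minikube/certs. A quick hedged spot-check on the node, assuming openssl is available there (it is invoked a few lines below):

	    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	    # the output should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.6, 192.168.39.170, 192.168.39.30 and 192.168.39.254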
	I0815 00:23:05.821915   30723 ssh_runner.go:195] Run: openssl version
	I0815 00:23:05.827613   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:23:05.840340   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.844450   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.844499   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:23:05.850019   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:23:05.861182   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:23:05.872496   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.876597   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.876644   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:23:05.881951   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:23:05.893309   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:23:05.903270   30723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.907051   30723 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.907098   30723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:23:05.912365   30723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
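	The .0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the standard OpenSSL CA directory convention: each name is the subject hash of the certificate it points to, which is exactly what the "openssl x509 -hash" calls in between compute. A sketch of deriving one such link by hand, using files named in this log:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"              # hash-named link used for CA lookup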
	I0815 00:23:05.924899   30723 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:23:05.928787   30723 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 00:23:05.928833   30723 kubeadm.go:934] updating node {m03 192.168.39.30 8443 v1.31.0 crio true true} ...
	I0815 00:23:05.928904   30723 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
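	The drop-in above deliberately clears the packaged ExecStart (the empty ExecStart= line) and replaces it with minikube's own kubelet invocation pinned to this node's name and IP. Once it has been written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines further down), the effective unit can be inspected with standard systemd tooling, for example:

	    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart   # the merged ExecStart actually used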
	I0815 00:23:05.928929   30723 kube-vip.go:115] generating kube-vip config ...
	I0815 00:23:05.928957   30723 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:23:05.945776   30723 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:23:05.945826   30723 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
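	The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that keeps the control-plane VIP 192.168.39.254 advertised on eth0 and load-balances port 8443 across the control-plane nodes. A hedged way to confirm this once the node is up (the mirror-pod name below is an assumption based on the usual <pod>-<node> naming convention for static pods):

	    kubectl -n kube-system get pods -o wide | grep kube-vip    # expect kube-vip-ha-863044-m03 among the entries
	    ip addr show eth0 | grep 192.168.39.254                    # only on whichever node currently holds the VIP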
	I0815 00:23:05.945869   30723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:23:05.954537   30723 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 00:23:05.954590   30723 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 00:23:05.963254   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 00:23:05.963279   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 00:23:05.963297   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:23:05.963283   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:23:05.963254   30723 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 00:23:05.963372   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 00:23:05.963407   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:23:05.963430   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 00:23:05.971891   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 00:23:05.971920   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 00:23:05.982514   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 00:23:05.982547   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 00:23:05.982523   30723 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:23:05.982664   30723 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 00:23:06.032819   30723 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 00:23:06.032865   30723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
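	Each binary is fetched through a URL carrying a checksum=file:...sha256 query, so minikube verifies it against the published SHA-256 before caching it and copying it onto the node. A manual spot-check of the copied kubelet, as a sketch using the same URLs shown above:

	    curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  /var/lib/minikube/binaries/v1.31.0/kubelet" | sha256sum --check   # note the two spaces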
	I0815 00:23:06.771496   30723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 00:23:06.780671   30723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 00:23:06.797055   30723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:23:06.814947   30723 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:23:06.832182   30723 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:23:06.835880   30723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 00:23:06.848417   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:06.971574   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:23:06.989270   30723 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:23:06.989750   30723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:23:06.989797   30723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:23:07.004926   30723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
	I0815 00:23:07.005394   30723 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:23:07.005901   30723 main.go:141] libmachine: Using API Version  1
	I0815 00:23:07.005925   30723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:23:07.006221   30723 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:23:07.006420   30723 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:23:07.006531   30723 start.go:317] joinCluster: &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:23:07.006707   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 00:23:07.006729   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:23:07.009661   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:07.010105   30723 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:23:07.010128   30723 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:23:07.010269   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:23:07.010428   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:23:07.010593   30723 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:23:07.010745   30723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:23:07.159544   30723 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:23:07.159590   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z8bf15.z1raht0f1z3edyo5 --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m03 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443"
	I0815 00:23:28.183777   30723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z8bf15.z1raht0f1z3edyo5 --discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-863044-m03 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": (21.024162503s)
	I0815 00:23:28.183819   30723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 00:23:28.752616   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-863044-m03 minikube.k8s.io/updated_at=2024_08_15T00_23_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=ha-863044 minikube.k8s.io/primary=false
	I0815 00:23:28.868400   30723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-863044-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 00:23:28.986224   30723 start.go:319] duration metric: took 21.979685924s to joinCluster
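	At this point the third control-plane node has joined via kubeadm, been labeled with the minikube metadata, and had its control-plane NoSchedule taint removed so it also schedules workloads. A hedged sanity check from any machine with kubectl access to the cluster (the label selector assumes the usual kubeadm component labels on static pods):

	    kubectl get nodes -o wide                                   # ha-863044, ha-863044-m02 and ha-863044-m03 should all appear
	    kubectl -n kube-system get pods -l component=etcd -o wide   # one etcd member per control-plane node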
	I0815 00:23:28.986308   30723 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 00:23:28.986655   30723 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:23:28.987863   30723 out.go:177] * Verifying Kubernetes components...
	I0815 00:23:28.989030   30723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:23:29.239801   30723 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:23:29.261020   30723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:23:29.261366   30723 kapi.go:59] client config for ha-863044: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 00:23:29.261442   30723 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.6:8443
	I0815 00:23:29.261706   30723 node_ready.go:35] waiting up to 6m0s for node "ha-863044-m03" to be "Ready" ...
	I0815 00:23:29.261790   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:29.261803   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:29.261814   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:29.261819   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:29.265217   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:29.762201   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:29.762221   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:29.762267   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:29.762275   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:29.765605   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:30.262850   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:30.262876   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:30.262887   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:30.262893   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:30.266850   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:30.762218   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:30.762244   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:30.762256   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:30.762264   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:30.765387   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:31.261951   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:31.261972   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:31.261979   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:31.261983   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:31.264871   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:31.265391   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:31.762374   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:31.762395   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:31.762403   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:31.762407   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:31.765551   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:32.262782   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:32.262804   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:32.262814   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:32.262821   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:32.266272   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:32.761957   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:32.761980   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:32.761990   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:32.761996   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:32.765626   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:33.262203   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:33.262227   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:33.262236   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:33.262240   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:33.265402   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:33.265989   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:33.762294   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:33.762320   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:33.762331   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:33.762337   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:33.765600   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:34.262715   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:34.262742   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:34.262754   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:34.262760   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:34.266416   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:34.762377   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:34.762401   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:34.762409   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:34.762415   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:34.765678   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:35.262118   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:35.262139   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:35.262149   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:35.262153   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:35.265175   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:35.762531   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:35.762558   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:35.762569   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:35.762574   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:35.766589   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:35.767121   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:36.262355   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:36.262381   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:36.262392   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:36.262399   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:36.265426   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:36.762241   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:36.762267   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:36.762275   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:36.762278   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:36.765463   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:37.262753   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:37.262774   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:37.262782   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:37.262788   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:37.265905   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:37.761868   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:37.761896   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:37.761915   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:37.761921   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:37.764397   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:38.261984   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:38.262005   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:38.262013   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:38.262018   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:38.265252   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:38.265721   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:38.762095   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:38.762116   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:38.762125   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:38.762128   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:38.765257   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:39.262271   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:39.262292   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:39.262300   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:39.262304   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:39.265431   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:39.762336   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:39.762356   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:39.762365   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:39.762369   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:39.765460   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:40.261997   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:40.262021   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:40.262032   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:40.262037   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:40.265626   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:40.266146   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:40.761914   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:40.761940   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:40.761948   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:40.761953   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:40.765018   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:41.262822   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:41.262843   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:41.262850   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:41.262857   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:41.266341   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:41.762252   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:41.762273   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:41.762281   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:41.762285   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:41.765201   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:42.262441   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:42.262462   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:42.262470   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:42.262474   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:42.266072   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:42.266714   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:42.762042   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:42.762064   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:42.762071   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:42.762075   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:42.764954   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:43.262497   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:43.262517   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:43.262526   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:43.262531   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:43.265650   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:43.762580   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:43.762600   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:43.762607   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:43.762612   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:43.765535   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:44.261983   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:44.262004   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:44.262011   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:44.262016   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:44.265367   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:44.762525   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:44.762549   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:44.762560   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:44.762566   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:44.765739   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:44.766328   30723 node_ready.go:53] node "ha-863044-m03" has status "Ready":"False"
	I0815 00:23:45.262307   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:45.262328   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:45.262335   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:45.262339   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:45.265414   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:45.762870   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:45.762903   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:45.762911   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:45.762915   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:45.765898   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.262664   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:46.262686   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.262697   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.262703   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.267191   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:46.762403   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:46.762425   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.762433   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.762436   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.766020   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:46.766591   30723 node_ready.go:49] node "ha-863044-m03" has status "Ready":"True"
	I0815 00:23:46.766614   30723 node_ready.go:38] duration metric: took 17.504893196s for node "ha-863044-m03" to be "Ready" ...
	I0815 00:23:46.766621   30723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
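	The loop above polls the node object roughly every 500 ms until its Ready condition turns True (about 17.5 s here), and the same pattern is then repeated for each system-critical pod. The plain kubectl equivalent, as a sketch:

	    kubectl wait --for=condition=Ready node/ha-863044-m03 --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m   # repeat per component/label as needed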
	I0815 00:23:46.766675   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:46.766685   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.766692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.766696   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.771757   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:46.778225   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.778300   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-bc2jh
	I0815 00:23:46.778310   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.778317   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.778320   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.780721   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.781337   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.781351   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.781358   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.781363   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.783502   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.784055   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.784074   30723 pod_ready.go:81] duration metric: took 5.82559ms for pod "coredns-6f6b679f8f-bc2jh" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.784082   30723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.784134   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-jxpqd
	I0815 00:23:46.784143   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.784150   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.784159   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.786322   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.786834   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.786848   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.786855   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.786859   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.788908   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.789381   30723 pod_ready.go:92] pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.789399   30723 pod_ready.go:81] duration metric: took 5.309653ms for pod "coredns-6f6b679f8f-jxpqd" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.789410   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.789460   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044
	I0815 00:23:46.789471   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.789481   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.789490   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.791392   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.791995   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:46.792013   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.792024   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.792032   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.794092   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:46.794448   30723 pod_ready.go:92] pod "etcd-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.794464   30723 pod_ready.go:81] duration metric: took 5.043831ms for pod "etcd-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.794471   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.794507   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m02
	I0815 00:23:46.794515   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.794520   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.794523   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.796416   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.796941   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:46.796957   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.796963   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.796968   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.798918   30723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 00:23:46.799280   30723 pod_ready.go:92] pod "etcd-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:46.799297   30723 pod_ready.go:81] duration metric: took 4.820222ms for pod "etcd-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.799306   30723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:46.963197   30723 request.go:632] Waited for 163.828732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m03
	I0815 00:23:46.963262   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/etcd-ha-863044-m03
	I0815 00:23:46.963268   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:46.963275   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:46.963287   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:46.966247   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:47.163274   30723 request.go:632] Waited for 196.370188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:47.163343   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:47.163351   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.163364   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.163375   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.165860   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:47.166257   30723 pod_ready.go:92] pod "etcd-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.166277   30723 pod_ready.go:81] duration metric: took 366.963774ms for pod "etcd-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.166297   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.362470   30723 request.go:632] Waited for 196.093871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:23:47.362528   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044
	I0815 00:23:47.362535   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.362545   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.362554   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.365637   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.562825   30723 request.go:632] Waited for 196.401068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:47.562896   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:47.562901   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.562909   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.562913   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.565976   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.566634   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.566656   30723 pod_ready.go:81] duration metric: took 400.351897ms for pod "kube-apiserver-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.566669   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.762640   30723 request.go:632] Waited for 195.898128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:23:47.762727   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m02
	I0815 00:23:47.762740   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.762751   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.762761   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.766059   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.963284   30723 request.go:632] Waited for 196.310541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:47.963366   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:47.963376   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:47.963386   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:47.963392   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:47.966509   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:47.967134   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:47.967151   30723 pod_ready.go:81] duration metric: took 400.470846ms for pod "kube-apiserver-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:47.967163   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.162740   30723 request.go:632] Waited for 195.501179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m03
	I0815 00:23:48.162820   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-863044-m03
	I0815 00:23:48.162830   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.162837   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.162841   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.165747   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:48.362867   30723 request.go:632] Waited for 196.34759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:48.362917   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:48.362923   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.362930   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.362936   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.366134   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.366679   30723 pod_ready.go:92] pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:48.366696   30723 pod_ready.go:81] duration metric: took 399.526483ms for pod "kube-apiserver-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.366713   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.562854   30723 request.go:632] Waited for 196.063266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:23:48.562903   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044
	I0815 00:23:48.562908   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.562916   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.562920   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.566154   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.763311   30723 request.go:632] Waited for 196.366786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:48.763407   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:48.763418   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.763429   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.763440   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.766790   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:48.767433   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:48.767451   30723 pod_ready.go:81] duration metric: took 400.728441ms for pod "kube-controller-manager-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.767463   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:48.962407   30723 request.go:632] Waited for 194.882466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:23:48.962482   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m02
	I0815 00:23:48.962487   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:48.962495   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:48.962502   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:48.965861   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.163177   30723 request.go:632] Waited for 196.351167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.163230   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.163236   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.163249   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.163270   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.166571   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.167130   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.167148   30723 pod_ready.go:81] duration metric: took 399.677131ms for pod "kube-controller-manager-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.167159   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.363305   30723 request.go:632] Waited for 196.076477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m03
	I0815 00:23:49.363369   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-863044-m03
	I0815 00:23:49.363375   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.363383   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.363389   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.366479   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.562403   30723 request.go:632] Waited for 195.275827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:49.562477   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:49.562482   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.562490   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.562494   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.565661   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.566321   30723 pod_ready.go:92] pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.566354   30723 pod_ready.go:81] duration metric: took 399.187513ms for pod "kube-controller-manager-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.566367   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.763450   30723 request.go:632] Waited for 197.012223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:23:49.763536   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6l4gp
	I0815 00:23:49.763548   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.763559   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.763565   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.766755   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:49.962813   30723 request.go:632] Waited for 195.352835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.962880   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:49.962888   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:49.962901   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:49.962913   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:49.974265   30723 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 00:23:49.974888   30723 pod_ready.go:92] pod "kube-proxy-6l4gp" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:49.974915   30723 pod_ready.go:81] duration metric: took 408.539871ms for pod "kube-proxy-6l4gp" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:49.974929   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.162858   30723 request.go:632] Waited for 187.863713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:23:50.162906   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758vr
	I0815 00:23:50.162911   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.162918   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.162923   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.166036   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.362476   30723 request.go:632] Waited for 195.661693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:50.362524   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:50.362529   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.362536   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.362540   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.365821   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.366491   30723 pod_ready.go:92] pod "kube-proxy-758vr" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:50.366509   30723 pod_ready.go:81] duration metric: took 391.573753ms for pod "kube-proxy-758vr" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.366517   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxmqn" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.563085   30723 request.go:632] Waited for 196.511211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxmqn
	I0815 00:23:50.563153   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxmqn
	I0815 00:23:50.563159   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.563167   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.563170   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.566786   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.762900   30723 request.go:632] Waited for 195.341406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:50.762963   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:50.762971   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.762983   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.762994   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.766297   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:50.766778   30723 pod_ready.go:92] pod "kube-proxy-qxmqn" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:50.766797   30723 pod_ready.go:81] duration metric: took 400.271262ms for pod "kube-proxy-qxmqn" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.766806   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:50.962948   30723 request.go:632] Waited for 196.051355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:23:50.963021   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044
	I0815 00:23:50.963029   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:50.963040   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:50.963047   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:50.966182   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.162480   30723 request.go:632] Waited for 195.656633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:51.162530   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044
	I0815 00:23:51.162535   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.162543   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.162548   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.165107   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:51.165684   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.165707   30723 pod_ready.go:81] duration metric: took 398.894169ms for pod "kube-scheduler-ha-863044" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.165718   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.362730   30723 request.go:632] Waited for 196.932795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:23:51.362783   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m02
	I0815 00:23:51.362788   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.362796   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.362799   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.366771   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.562777   30723 request.go:632] Waited for 195.362919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:51.562881   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m02
	I0815 00:23:51.562891   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.562898   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.562904   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.565998   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.566525   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.566541   30723 pod_ready.go:81] duration metric: took 400.815114ms for pod "kube-scheduler-ha-863044-m02" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.566553   30723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.762645   30723 request.go:632] Waited for 196.027971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m03
	I0815 00:23:51.762711   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-863044-m03
	I0815 00:23:51.762717   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.762725   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.762732   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.765743   30723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 00:23:51.963320   30723 request.go:632] Waited for 196.731498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:51.963409   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes/ha-863044-m03
	I0815 00:23:51.963418   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.963429   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.963438   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.966817   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:51.967345   30723 pod_ready.go:92] pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 00:23:51.967368   30723 pod_ready.go:81] duration metric: took 400.803731ms for pod "kube-scheduler-ha-863044-m03" in "kube-system" namespace to be "Ready" ...
	I0815 00:23:51.967381   30723 pod_ready.go:38] duration metric: took 5.200749366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 00:23:51.967402   30723 api_server.go:52] waiting for apiserver process to appear ...
	I0815 00:23:51.967464   30723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:23:51.984625   30723 api_server.go:72] duration metric: took 22.998247596s to wait for apiserver process to appear ...
	I0815 00:23:51.984647   30723 api_server.go:88] waiting for apiserver healthz status ...
	I0815 00:23:51.984678   30723 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0815 00:23:51.988572   30723 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0815 00:23:51.988643   30723 round_trippers.go:463] GET https://192.168.39.6:8443/version
	I0815 00:23:51.988671   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:51.988683   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:51.988692   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:51.989499   30723 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 00:23:51.989551   30723 api_server.go:141] control plane version: v1.31.0
	I0815 00:23:51.989563   30723 api_server.go:131] duration metric: took 4.900846ms to wait for apiserver health ...
	I0815 00:23:51.989572   30723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 00:23:52.163222   30723 request.go:632] Waited for 173.57961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.163285   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.163290   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.163298   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.163305   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.168452   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:52.174557   30723 system_pods.go:59] 24 kube-system pods found
	I0815 00:23:52.174584   30723 system_pods.go:61] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:23:52.174589   30723 system_pods.go:61] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:23:52.174592   30723 system_pods.go:61] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:23:52.174595   30723 system_pods.go:61] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:23:52.174598   30723 system_pods.go:61] "etcd-ha-863044-m03" [774efb6d-9c64-4d80-8bc0-54a8ee452346] Running
	I0815 00:23:52.174601   30723 system_pods.go:61] "kindnet-jdl2d" [f621eec7-2d0e-4f1f-83f3-7bc5a1322693] Running
	I0815 00:23:52.174603   30723 system_pods.go:61] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:23:52.174606   30723 system_pods.go:61] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:23:52.174608   30723 system_pods.go:61] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:23:52.174611   30723 system_pods.go:61] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:23:52.174614   30723 system_pods.go:61] "kube-apiserver-ha-863044-m03" [aea4dcdd-c0d6-44d8-a02d-881b92de68d3] Running
	I0815 00:23:52.174617   30723 system_pods.go:61] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:23:52.174620   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:23:52.174624   30723 system_pods.go:61] "kube-controller-manager-ha-863044-m03" [0ece8182-3a99-4f02-8ef7-d8ddbe2edf98] Running
	I0815 00:23:52.174628   30723 system_pods.go:61] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:23:52.174634   30723 system_pods.go:61] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:23:52.174636   30723 system_pods.go:61] "kube-proxy-qxmqn" [c40bb19e-c0bd-43fb-bbfc-3c9dfcd2fbad] Running
	I0815 00:23:52.174640   30723 system_pods.go:61] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:23:52.174642   30723 system_pods.go:61] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:23:52.174645   30723 system_pods.go:61] "kube-scheduler-ha-863044-m03" [a5dad54e-959c-4bb1-ab47-9c952dac9926] Running
	I0815 00:23:52.174648   30723 system_pods.go:61] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:23:52.174651   30723 system_pods.go:61] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:23:52.174654   30723 system_pods.go:61] "kube-vip-ha-863044-m03" [b66363f1-db60-4f4b-8525-2d4c5366ceb4] Running
	I0815 00:23:52.174656   30723 system_pods.go:61] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:23:52.174662   30723 system_pods.go:74] duration metric: took 185.083199ms to wait for pod list to return data ...
	I0815 00:23:52.174672   30723 default_sa.go:34] waiting for default service account to be created ...
	I0815 00:23:52.363097   30723 request.go:632] Waited for 188.345607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:23:52.363164   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/default/serviceaccounts
	I0815 00:23:52.363176   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.363187   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.363197   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.366585   30723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 00:23:52.366696   30723 default_sa.go:45] found service account: "default"
	I0815 00:23:52.366711   30723 default_sa.go:55] duration metric: took 192.033273ms for default service account to be created ...
	I0815 00:23:52.366718   30723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 00:23:52.563133   30723 request.go:632] Waited for 196.356112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.563221   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/namespaces/kube-system/pods
	I0815 00:23:52.563232   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.563244   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.563251   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.568835   30723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 00:23:52.576417   30723 system_pods.go:86] 24 kube-system pods found
	I0815 00:23:52.576442   30723 system_pods.go:89] "coredns-6f6b679f8f-bc2jh" [77760785-a989-4c45-a8e0-e758db3a252b] Running
	I0815 00:23:52.576448   30723 system_pods.go:89] "coredns-6f6b679f8f-jxpqd" [72e46071-4563-4c8c-a269-c32c4d0fced3] Running
	I0815 00:23:52.576453   30723 system_pods.go:89] "etcd-ha-863044" [e41d94d6-4a69-49a3-93bc-d726a95b08b2] Running
	I0815 00:23:52.576457   30723 system_pods.go:89] "etcd-ha-863044-m02" [1c022b82-287f-493c-89ff-3aa70264c39a] Running
	I0815 00:23:52.576461   30723 system_pods.go:89] "etcd-ha-863044-m03" [774efb6d-9c64-4d80-8bc0-54a8ee452346] Running
	I0815 00:23:52.576464   30723 system_pods.go:89] "kindnet-jdl2d" [f621eec7-2d0e-4f1f-83f3-7bc5a1322693] Running
	I0815 00:23:52.576468   30723 system_pods.go:89] "kindnet-ptbpb" [b1fee332-fbc7-4b7b-818a-9ba398dce43e] Running
	I0815 00:23:52.576472   30723 system_pods.go:89] "kindnet-xpnzd" [6cd2a4c8-3c5f-4860-90bb-23a8c6f72a15] Running
	I0815 00:23:52.576476   30723 system_pods.go:89] "kube-apiserver-ha-863044" [52bc4344-75cb-4659-a1df-db580ad5d026] Running
	I0815 00:23:52.576481   30723 system_pods.go:89] "kube-apiserver-ha-863044-m02" [087ef288-843d-44fc-9c5b-1b302f6d2906] Running
	I0815 00:23:52.576486   30723 system_pods.go:89] "kube-apiserver-ha-863044-m03" [aea4dcdd-c0d6-44d8-a02d-881b92de68d3] Running
	I0815 00:23:52.576490   30723 system_pods.go:89] "kube-controller-manager-ha-863044" [4539aebc-86af-4e9f-8736-348d90f3981d] Running
	I0815 00:23:52.576498   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m02" [a0c27335-3bc0-4a2e-9875-0c736b47a4b1] Running
	I0815 00:23:52.576503   30723 system_pods.go:89] "kube-controller-manager-ha-863044-m03" [0ece8182-3a99-4f02-8ef7-d8ddbe2edf98] Running
	I0815 00:23:52.576509   30723 system_pods.go:89] "kube-proxy-6l4gp" [85ddf43f-82b7-4325-a5d8-d4f2242b4e7c] Running
	I0815 00:23:52.576513   30723 system_pods.go:89] "kube-proxy-758vr" [0963208c-92ef-4625-8805-1c8ad8ae7b51] Running
	I0815 00:23:52.576517   30723 system_pods.go:89] "kube-proxy-qxmqn" [c40bb19e-c0bd-43fb-bbfc-3c9dfcd2fbad] Running
	I0815 00:23:52.576522   30723 system_pods.go:89] "kube-scheduler-ha-863044" [84013745-813a-4eab-a9a5-6edd28301611] Running
	I0815 00:23:52.576526   30723 system_pods.go:89] "kube-scheduler-ha-863044-m02" [62650272-5fa7-4ff2-83b5-6cb6f84d497b] Running
	I0815 00:23:52.576531   30723 system_pods.go:89] "kube-scheduler-ha-863044-m03" [a5dad54e-959c-4bb1-ab47-9c952dac9926] Running
	I0815 00:23:52.576535   30723 system_pods.go:89] "kube-vip-ha-863044" [ff875a81-1ee8-4073-a666-4f9dc4239e38] Running
	I0815 00:23:52.576539   30723 system_pods.go:89] "kube-vip-ha-863044-m02" [e9f868e0-44af-4e2b-8699-a88d1a752594] Running
	I0815 00:23:52.576544   30723 system_pods.go:89] "kube-vip-ha-863044-m03" [b66363f1-db60-4f4b-8525-2d4c5366ceb4] Running
	I0815 00:23:52.576547   30723 system_pods.go:89] "storage-provisioner" [a7565569-2f8c-4393-b4f8-b8548d65f794] Running
	I0815 00:23:52.576553   30723 system_pods.go:126] duration metric: took 209.829403ms to wait for k8s-apps to be running ...
	I0815 00:23:52.576562   30723 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 00:23:52.576603   30723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:23:52.593088   30723 system_svc.go:56] duration metric: took 16.516305ms WaitForService to wait for kubelet
	I0815 00:23:52.593116   30723 kubeadm.go:582] duration metric: took 23.606742835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:23:52.593134   30723 node_conditions.go:102] verifying NodePressure condition ...
	I0815 00:23:52.762489   30723 request.go:632] Waited for 169.272948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.6:8443/api/v1/nodes
	I0815 00:23:52.762543   30723 round_trippers.go:463] GET https://192.168.39.6:8443/api/v1/nodes
	I0815 00:23:52.762548   30723 round_trippers.go:469] Request Headers:
	I0815 00:23:52.762556   30723 round_trippers.go:473]     Accept: application/json, */*
	I0815 00:23:52.762559   30723 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 00:23:52.766816   30723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 00:23:52.768109   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768129   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768140   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768146   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768151   30723 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 00:23:52.768157   30723 node_conditions.go:123] node cpu capacity is 2
	I0815 00:23:52.768163   30723 node_conditions.go:105] duration metric: took 175.024259ms to run NodePressure ...
	I0815 00:23:52.768183   30723 start.go:241] waiting for startup goroutines ...
	I0815 00:23:52.768213   30723 start.go:255] writing updated cluster config ...
	I0815 00:23:52.768483   30723 ssh_runner.go:195] Run: rm -f paused
	I0815 00:23:52.817091   30723 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 00:23:52.818943   30723 out.go:177] * Done! kubectl is now configured to use "ha-863044" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.196735759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681708196711525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=293ea938-6489-48b8-89e0-5c1b4b477ecf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.197730087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f93069ca-ff28-4449-b5de-41287635d04e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.197807698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f93069ca-ff28-4449-b5de-41287635d04e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.198126834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f93069ca-ff28-4449-b5de-41287635d04e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.232915252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f51b3ac1-1977-44a8-bfa0-b5d1e8b8f03b name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.233015321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f51b3ac1-1977-44a8-bfa0-b5d1e8b8f03b name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.234314981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93df238c-5786-4e97-98b9-10c212d3bb43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.234770432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681708234745826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93df238c-5786-4e97-98b9-10c212d3bb43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.235359493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b41b5a7c-4556-4dbf-8452-ca561d48ab76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.235437685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b41b5a7c-4556-4dbf-8452-ca561d48ab76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.235719939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b41b5a7c-4556-4dbf-8452-ca561d48ab76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.272414559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a15c338-eee6-4eaf-94ea-9574390a1319 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.272545141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a15c338-eee6-4eaf-94ea-9574390a1319 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.276762169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=698d0259-7cd5-4e66-9a96-68d365bfd678 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.277556194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681708277525556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=698d0259-7cd5-4e66-9a96-68d365bfd678 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.278550062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a482fe56-26ed-4425-a79b-c859425b6718 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.278623814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a482fe56-26ed-4425-a79b-c859425b6718 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.278860228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a482fe56-26ed-4425-a79b-c859425b6718 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.326262379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08237dd8-e83f-4826-899b-bacc9bfb1751 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.326335821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08237dd8-e83f-4826-899b-bacc9bfb1751 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.327238468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68651668-afbe-4ec8-9349-a70effed16f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.327662391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681708327637617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68651668-afbe-4ec8-9349-a70effed16f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.328067710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6790820-38c1-43ce-9449-67780be6b7cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.328120508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6790820-38c1-43ce-9449-67780be6b7cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:28:28 ha-863044 crio[681]: time="2024-08-15 00:28:28.328378083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681438171701468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d,PodSandboxId:e6e8146f29bde538c7ae23bcea4317033e3c3f8902a557af46925d5710c262bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723681299723187197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299671457880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681299673848624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a9
89-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723681287926625552,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172368128
4364979513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759,PodSandboxId:77e4316165593ea75a453c19c9fddf5203bfd45898f21e49c9fc9b83d291e22d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172368127617
1198759,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e65923f5ca343c7ad1958ac0690ea3f,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506,PodSandboxId:a1cf7b7ef6f41616b120adf62166fb018ce255bc7069e3e0fda6f2086db0fa45,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723681273710128815,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681273656731817,Labels:map[string]s
tring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681273612612251,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70,PodSandboxId:e430c0bc26b2557fa2ba39cf57c7729ce11889df4d2da1c10d04e7f56489f12e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681273588332289,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6790820-38c1-43ce-9449-67780be6b7cf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a3e7281c498f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e9555e65cebe7       busybox-7dff88458-ck6d9
	8c05051caebc6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e6e8146f29bde       storage-provisioner
	770157c751290       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   1334a86739ccf       coredns-6f6b679f8f-bc2jh
	a6304cc907b70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   4feecb19b205a       coredns-6f6b679f8f-jxpqd
	024782bd78877       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   c2b2f0c2bdc2e       kindnet-ptbpb
	5d1d7d03658b7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   a6a3b389836fc       kube-proxy-758vr
	67611ae45f1e5       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   77e4316165593       kube-vip-ha-863044
	9038fb04ce717       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   a1cf7b7ef6f41       kube-controller-manager-ha-863044
	0624b371b469a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   ba41c766be2d5       kube-scheduler-ha-863044
	acf9154524991       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   1825ea5e56cf4       etcd-ha-863044
	edee09d480aed       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   e430c0bc26b25       kube-apiserver-ha-863044
	
	
	==> coredns [770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e] <==
	[INFO] 10.244.0.4:45424 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003457281s
	[INFO] 10.244.0.4:44072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187168s
	[INFO] 10.244.2.2:55108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149056s
	[INFO] 10.244.2.2:41293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000385323s
	[INFO] 10.244.2.2:38729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145689s
	[INFO] 10.244.2.2:33124 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000292113s
	[INFO] 10.244.1.2:33531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000255406s
	[INFO] 10.244.1.2:51132 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001668147s
	[INFO] 10.244.1.2:42284 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114325s
	[INFO] 10.244.1.2:50113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066268s
	[INFO] 10.244.1.2:52660 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013458s
	[INFO] 10.244.0.4:46269 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091339s
	[INFO] 10.244.0.4:59422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042431s
	[INFO] 10.244.2.2:36516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086546s
	[INFO] 10.244.1.2:57808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122743s
	[INFO] 10.244.1.2:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116945s
	[INFO] 10.244.1.2:51392 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008307s
	[INFO] 10.244.0.4:42010 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031726s
	[INFO] 10.244.2.2:44915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143127s
	[INFO] 10.244.2.2:37741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170015s
	[INFO] 10.244.2.2:58647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130581s
	[INFO] 10.244.1.2:49418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247229s
	[INFO] 10.244.1.2:44042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127451s
	[INFO] 10.244.1.2:41801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015235s
	[INFO] 10.244.1.2:51078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176731s
	
	
	==> coredns [a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787] <==
	[INFO] 10.244.0.4:45311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000930137s
	[INFO] 10.244.0.4:39922 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00305449s
	[INFO] 10.244.2.2:33332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115309s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001291279s
	[INFO] 10.244.2.2:56904 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001340981s
	[INFO] 10.244.1.2:32926 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109486s
	[INFO] 10.244.0.4:35014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015446s
	[INFO] 10.244.0.4:46414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148102s
	[INFO] 10.244.2.2:51282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016555s
	[INFO] 10.244.2.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529953s
	[INFO] 10.244.2.2:42863 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00043817s
	[INFO] 10.244.2.2:39074 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067798s
	[INFO] 10.244.1.2:52314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192016s
	[INFO] 10.244.1.2:58476 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116995s
	[INFO] 10.244.1.2:39360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001839118s
	[INFO] 10.244.0.4:51814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012471s
	[INFO] 10.244.0.4:40547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083981s
	[INFO] 10.244.2.2:34181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015996s
	[INFO] 10.244.2.2:56520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000727856s
	[INFO] 10.244.2.2:38242 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103367s
	[INFO] 10.244.1.2:50032 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.0.4:55523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123577s
	[INFO] 10.244.0.4:42586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010348s
	[INFO] 10.244.0.4:36103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184736s
	[INFO] 10.244.2.2:57332 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163958s
	
	
	==> describe nodes <==
	Name:               ha-863044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:21:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:24:23 +0000   Thu, 15 Aug 2024 00:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-863044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e33f2588c28f4daf846273c46c5ec17c
	  System UUID:                e33f2588-c28f-4daf-8462-73c46c5ec17c
	  Boot ID:                    262603d0-6087-4822-8e6c-89d7a28279b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ck6d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-6f6b679f8f-bc2jh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 coredns-6f6b679f8f-jxpqd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 etcd-ha-863044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m9s
	  kube-system                 kindnet-ptbpb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m5s
	  kube-system                 kube-apiserver-ha-863044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-controller-manager-ha-863044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-proxy-758vr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-ha-863044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-vip-ha-863044                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m3s                   kube-proxy       
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m15s (x7 over 7m16s)  kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m15s (x8 over 7m16s)  kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x8 over 7m16s)  kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m9s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m8s                   kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s                   kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s                   kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m5s                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal  NodeReady                6m49s                  kubelet          Node ha-863044 status is now: NodeReady
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	
	
	Name:               ha-863044-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:22:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:25:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 00:24:18 +0000   Thu, 15 Aug 2024 00:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-863044-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 877b666314684accbfd657286f8d0095
	  System UUID:                877b6663-1468-4acc-bfd6-57286f8d0095
	  Boot ID:                    5a408699-89f8-44af-a389-c8beb5731e48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zmr7b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-863044-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m11s
	  kube-system                 kindnet-xpnzd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-863044-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-863044-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-6l4gp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-863044-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-863044-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s (x8 over 6m13s)  kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  NodeNotReady             2m39s                  node-controller  Node ha-863044-m02 status is now: NodeNotReady
	
	
	Name:               ha-863044-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_23_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:28:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:24:27 +0000   Thu, 15 Aug 2024 00:23:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    ha-863044-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0a91434394dddbc59d67dd539b2b7
	  System UUID:                bba0a914-3439-4ddd-bc59-d67dd539b2b7
	  Boot ID:                    ee412178-48eb-40cc-833e-05ae47d59349
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dpcjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-863044-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-jdl2d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m3s
	  kube-system                 kube-apiserver-ha-863044-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-863044-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-qxmqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-ha-863044-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-863044-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node ha-863044-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m                   node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal  RegisteredNode           4m54s                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	
	
	Name:               ha-863044-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_24_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:24:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:25:05 +0000   Thu, 15 Aug 2024 00:24:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-863044-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29de5816079a4aa6bb73571d88da2d1b
	  System UUID:                29de5816-079a-4aa6-bb73-571d88da2d1b
	  Boot ID:                    0cdcf6dc-9f15-484d-b8ad-776471728809
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7r4h2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-72j9n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-863044-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug15 00:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050133] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.709914] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.846087] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.586519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 00:21] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061023] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060159] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.174439] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118153] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.259429] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.778855] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.212652] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.060600] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.151808] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.077604] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.187372] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.703882] kauditd_printk_skb: 23 callbacks suppressed
	[Aug15 00:22] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6] <==
	{"level":"warn","ts":"2024-08-15T00:28:28.381505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.480883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.556021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.580904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.589628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.599830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.604608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.613087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.619845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.627185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.631721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.662506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.665673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.673875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.681142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.681804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.688975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.692944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.696091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.699732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.706766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.710248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:28:28.714189Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"f7f22545c69cf70a","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T00:28:28.714228Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f7f22545c69cf70a","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-15T00:28:28.715638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:28:28 up 7 min,  0 users,  load average: 0.14, 0.19, 0.11
	Linux ha-863044 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d] <==
	I0815 00:27:48.929364       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:27:58.923170       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:27:58.923294       1 main.go:299] handling current node
	I0815 00:27:58.923358       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:27:58.923378       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:27:58.923532       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:27:58.923554       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:27:58.923617       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:27:58.923635       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:28:08.924957       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:28:08.925169       1 main.go:299] handling current node
	I0815 00:28:08.925209       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:28:08.925233       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:28:08.925461       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:28:08.925501       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:28:08.925571       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:28:08.925590       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:28:18.925355       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:28:18.925384       1 main.go:299] handling current node
	I0815 00:28:18.925401       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:28:18.925406       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:28:18.925548       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:28:18.925583       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:28:18.925649       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:28:18.925667       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70] <==
	W0815 00:21:18.280898       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6]
	I0815 00:21:18.281778       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:21:18.296853       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:21:18.615915       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:21:19.756501       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:21:19.773059       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 00:21:19.953993       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:21:23.865374       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0815 00:21:24.272117       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0815 00:23:58.977616       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40932: use of closed network connection
	E0815 00:23:59.158964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40956: use of closed network connection
	E0815 00:23:59.332013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40974: use of closed network connection
	E0815 00:23:59.509349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40996: use of closed network connection
	E0815 00:23:59.691982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41014: use of closed network connection
	E0815 00:23:59.884601       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41036: use of closed network connection
	E0815 00:24:00.048860       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41058: use of closed network connection
	E0815 00:24:00.219559       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41074: use of closed network connection
	E0815 00:24:00.393751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41078: use of closed network connection
	E0815 00:24:00.676450       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41108: use of closed network connection
	E0815 00:24:00.835680       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41134: use of closed network connection
	E0815 00:24:01.016971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41148: use of closed network connection
	E0815 00:24:01.193382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41160: use of closed network connection
	E0815 00:24:01.359759       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41186: use of closed network connection
	E0815 00:24:01.527956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41202: use of closed network connection
	W0815 00:25:28.294687       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.30 192.168.39.6]
	
	
	==> kube-controller-manager [9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506] <==
	E0815 00:24:34.327823       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-vzkxz failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-vzkxz\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0815 00:24:34.728988       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-863044-m04\" does not exist"
	I0815 00:24:34.754572       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-863044-m04" podCIDRs=["10.244.3.0/24"]
	I0815 00:24:34.754634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.754669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.777657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:34.798636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:35.835650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:38.423648       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-863044-m04"
	I0815 00:24:38.482337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:39.409299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:39.442247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:45.132358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.253603       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:24:54.254109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.268789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:24:54.424933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:25:05.724012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:25:49.462565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:49.462766       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:25:49.486494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:49.548685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.415198ms"
	I0815 00:25:49.549390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.623µs"
	I0815 00:25:53.458494       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:25:54.720986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	
	
	==> kube-proxy [5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:21:24.752099       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:21:24.765176       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0815 00:21:24.765269       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:21:24.839381       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:21:24.839433       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:21:24.839463       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:21:24.843188       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:21:24.843505       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:21:24.843526       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:21:24.844946       1 config.go:197] "Starting service config controller"
	I0815 00:21:24.844961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:21:24.844979       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:21:24.844992       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:21:24.845530       1 config.go:326] "Starting node config controller"
	I0815 00:21:24.845537       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:21:24.945136       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:21:24.945243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:21:24.946555       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c] <==
	W0815 00:21:17.683275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 00:21:17.683323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.699374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:21:17.699429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.705353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:21:17.705448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.757345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:21:17.757394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.813621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:21:17.813720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 00:21:17.870456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:21:17.870590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 00:21:19.967565       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 00:23:26.029190       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lcjxq\": pod kindnet-lcjxq is already assigned to node \"ha-863044-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-lcjxq" node="ha-863044-m03"
	E0815 00:23:26.029523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15a31f4d-5cbe-4ca9-b0fb-d0ce15a0d3b5(kube-system/kindnet-lcjxq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lcjxq"
	E0815 00:23:26.029697       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lcjxq\": pod kindnet-lcjxq is already assigned to node \"ha-863044-m03\"" pod="kube-system/kindnet-lcjxq"
	I0815 00:23:26.029815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lcjxq" node="ha-863044-m03"
	E0815 00:24:34.806628       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hhvjh\": pod kube-proxy-hhvjh is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.808667       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fa2048e-40a6-4d67-9a16-e6d68caecb6b(kube-system/kube-proxy-hhvjh) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hhvjh"
	E0815 00:24:34.809740       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hhvjh\": pod kube-proxy-hhvjh is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-hhvjh"
	I0815 00:24:34.809950       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.844902       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:24:34.845683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ac2ee81-5268-49b4-80fc-2b9950b30cad(kube-system/kube-proxy-5ptdm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ptdm"
	E0815 00:24:34.845833       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-5ptdm"
	I0815 00:24:34.845899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	
	
	==> kubelet <==
	Aug 15 00:27:10 ha-863044 kubelet[1326]: E0815 00:27:10.004721    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681630004433052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:19 ha-863044 kubelet[1326]: E0815 00:27:19.906537    1326 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:27:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:27:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:27:20 ha-863044 kubelet[1326]: E0815 00:27:20.006242    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681640005738508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:20 ha-863044 kubelet[1326]: E0815 00:27:20.006265    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681640005738508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:30 ha-863044 kubelet[1326]: E0815 00:27:30.007732    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681650007139605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:30 ha-863044 kubelet[1326]: E0815 00:27:30.007797    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681650007139605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:40 ha-863044 kubelet[1326]: E0815 00:27:40.009098    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681660008636928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:40 ha-863044 kubelet[1326]: E0815 00:27:40.009426    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681660008636928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:50 ha-863044 kubelet[1326]: E0815 00:27:50.011868    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681670011591715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:27:50 ha-863044 kubelet[1326]: E0815 00:27:50.011900    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681670011591715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:00 ha-863044 kubelet[1326]: E0815 00:28:00.013507    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681680013252296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:00 ha-863044 kubelet[1326]: E0815 00:28:00.013891    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681680013252296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:10 ha-863044 kubelet[1326]: E0815 00:28:10.015419    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681690015116672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:10 ha-863044 kubelet[1326]: E0815 00:28:10.015454    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681690015116672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:19 ha-863044 kubelet[1326]: E0815 00:28:19.906700    1326 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:28:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:28:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:28:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:28:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:28:20 ha-863044 kubelet[1326]: E0815 00:28:20.016904    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681700016628159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:28:20 ha-863044 kubelet[1326]: E0815 00:28:20.016936    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723681700016628159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863044 -n ha-863044
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-863044 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-863044 -v=7 --alsologtostderr
E0815 00:28:45.640436   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:29:41.523188   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:30:09.225088   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-863044 -v=7 --alsologtostderr: exit status 82 (2m1.798960712s)

                                                
                                                
-- stdout --
	* Stopping node "ha-863044-m04"  ...
	* Stopping node "ha-863044-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:28:30.100421   36443 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:28:30.100538   36443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:30.100547   36443 out.go:304] Setting ErrFile to fd 2...
	I0815 00:28:30.100552   36443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:28:30.100789   36443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:28:30.101014   36443 out.go:298] Setting JSON to false
	I0815 00:28:30.101112   36443 mustload.go:65] Loading cluster: ha-863044
	I0815 00:28:30.101455   36443 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:28:30.101576   36443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:28:30.101744   36443 mustload.go:65] Loading cluster: ha-863044
	I0815 00:28:30.101874   36443 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:28:30.101906   36443 stop.go:39] StopHost: ha-863044-m04
	I0815 00:28:30.102246   36443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:30.102289   36443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:30.117592   36443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0815 00:28:30.118018   36443 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:30.118621   36443 main.go:141] libmachine: Using API Version  1
	I0815 00:28:30.118643   36443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:30.118953   36443 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:30.121452   36443 out.go:177] * Stopping node "ha-863044-m04"  ...
	I0815 00:28:30.122717   36443 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 00:28:30.122742   36443 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:28:30.122937   36443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 00:28:30.122958   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:28:30.125478   36443 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:30.125851   36443 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:24:15 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:28:30.125883   36443 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:28:30.125985   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:28:30.126139   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:28:30.126272   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:28:30.126394   36443 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:28:30.206602   36443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 00:28:30.259435   36443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 00:28:30.311670   36443 main.go:141] libmachine: Stopping "ha-863044-m04"...
	I0815 00:28:30.311710   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:30.313525   36443 main.go:141] libmachine: (ha-863044-m04) Calling .Stop
	I0815 00:28:30.316997   36443 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 0/120
	I0815 00:28:31.453452   36443 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:28:31.454712   36443 main.go:141] libmachine: Machine "ha-863044-m04" was stopped.
	I0815 00:28:31.454726   36443 stop.go:75] duration metric: took 1.332013412s to stop
	I0815 00:28:31.454745   36443 stop.go:39] StopHost: ha-863044-m03
	I0815 00:28:31.455027   36443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:28:31.455065   36443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:28:31.470001   36443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0815 00:28:31.470402   36443 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:28:31.470835   36443 main.go:141] libmachine: Using API Version  1
	I0815 00:28:31.470854   36443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:28:31.471142   36443 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:28:31.472902   36443 out.go:177] * Stopping node "ha-863044-m03"  ...
	I0815 00:28:31.473973   36443 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 00:28:31.473994   36443 main.go:141] libmachine: (ha-863044-m03) Calling .DriverName
	I0815 00:28:31.474180   36443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 00:28:31.474199   36443 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHHostname
	I0815 00:28:31.477087   36443 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:31.477509   36443 main.go:141] libmachine: (ha-863044-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:df:2b", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:22:55 +0000 UTC Type:0 Mac:52:54:00:5e:df:2b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-863044-m03 Clientid:01:52:54:00:5e:df:2b}
	I0815 00:28:31.477538   36443 main.go:141] libmachine: (ha-863044-m03) DBG | domain ha-863044-m03 has defined IP address 192.168.39.30 and MAC address 52:54:00:5e:df:2b in network mk-ha-863044
	I0815 00:28:31.477698   36443 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHPort
	I0815 00:28:31.477863   36443 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHKeyPath
	I0815 00:28:31.478006   36443 main.go:141] libmachine: (ha-863044-m03) Calling .GetSSHUsername
	I0815 00:28:31.478134   36443 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m03/id_rsa Username:docker}
	I0815 00:28:31.559428   36443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 00:28:31.613436   36443 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 00:28:31.667301   36443 main.go:141] libmachine: Stopping "ha-863044-m03"...
	I0815 00:28:31.667339   36443 main.go:141] libmachine: (ha-863044-m03) Calling .GetState
	I0815 00:28:31.668896   36443 main.go:141] libmachine: (ha-863044-m03) Calling .Stop
	I0815 00:28:31.672020   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 0/120
	I0815 00:28:32.673365   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 1/120
	I0815 00:28:33.674721   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 2/120
	I0815 00:28:34.675967   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 3/120
	I0815 00:28:35.677283   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 4/120
	I0815 00:28:36.678878   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 5/120
	I0815 00:28:37.680293   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 6/120
	I0815 00:28:38.681632   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 7/120
	I0815 00:28:39.683062   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 8/120
	I0815 00:28:40.684405   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 9/120
	I0815 00:28:41.686446   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 10/120
	I0815 00:28:42.688064   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 11/120
	I0815 00:28:43.689453   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 12/120
	I0815 00:28:44.690773   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 13/120
	I0815 00:28:45.692110   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 14/120
	I0815 00:28:46.693822   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 15/120
	I0815 00:28:47.695490   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 16/120
	I0815 00:28:48.696984   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 17/120
	I0815 00:28:49.698581   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 18/120
	I0815 00:28:50.699946   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 19/120
	I0815 00:28:51.701927   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 20/120
	I0815 00:28:52.703640   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 21/120
	I0815 00:28:53.705199   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 22/120
	I0815 00:28:54.706928   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 23/120
	I0815 00:28:55.708440   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 24/120
	I0815 00:28:56.710107   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 25/120
	I0815 00:28:57.711672   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 26/120
	I0815 00:28:58.713050   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 27/120
	I0815 00:28:59.714510   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 28/120
	I0815 00:29:00.715990   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 29/120
	I0815 00:29:01.717773   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 30/120
	I0815 00:29:02.719263   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 31/120
	I0815 00:29:03.720745   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 32/120
	I0815 00:29:04.722224   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 33/120
	I0815 00:29:05.723353   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 34/120
	I0815 00:29:06.725168   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 35/120
	I0815 00:29:07.726374   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 36/120
	I0815 00:29:08.727453   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 37/120
	I0815 00:29:09.728747   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 38/120
	I0815 00:29:10.730186   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 39/120
	I0815 00:29:11.731959   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 40/120
	I0815 00:29:12.733353   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 41/120
	I0815 00:29:13.735131   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 42/120
	I0815 00:29:14.736297   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 43/120
	I0815 00:29:15.738915   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 44/120
	I0815 00:29:16.741187   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 45/120
	I0815 00:29:17.742577   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 46/120
	I0815 00:29:18.743897   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 47/120
	I0815 00:29:19.745543   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 48/120
	I0815 00:29:20.746747   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 49/120
	I0815 00:29:21.748338   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 50/120
	I0815 00:29:22.749700   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 51/120
	I0815 00:29:23.750930   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 52/120
	I0815 00:29:24.752436   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 53/120
	I0815 00:29:25.753712   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 54/120
	I0815 00:29:26.755636   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 55/120
	I0815 00:29:27.757275   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 56/120
	I0815 00:29:28.758658   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 57/120
	I0815 00:29:29.760048   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 58/120
	I0815 00:29:30.761446   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 59/120
	I0815 00:29:31.763201   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 60/120
	I0815 00:29:32.764720   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 61/120
	I0815 00:29:33.765996   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 62/120
	I0815 00:29:34.767235   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 63/120
	I0815 00:29:35.768862   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 64/120
	I0815 00:29:36.770529   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 65/120
	I0815 00:29:37.772327   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 66/120
	I0815 00:29:38.773868   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 67/120
	I0815 00:29:39.775427   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 68/120
	I0815 00:29:40.777186   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 69/120
	I0815 00:29:41.779197   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 70/120
	I0815 00:29:42.780512   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 71/120
	I0815 00:29:43.782066   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 72/120
	I0815 00:29:44.783386   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 73/120
	I0815 00:29:45.784790   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 74/120
	I0815 00:29:46.786797   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 75/120
	I0815 00:29:47.788705   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 76/120
	I0815 00:29:48.790066   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 77/120
	I0815 00:29:49.791353   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 78/120
	I0815 00:29:50.792719   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 79/120
	I0815 00:29:51.794796   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 80/120
	I0815 00:29:52.796104   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 81/120
	I0815 00:29:53.797525   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 82/120
	I0815 00:29:54.799000   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 83/120
	I0815 00:29:55.800233   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 84/120
	I0815 00:29:56.801983   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 85/120
	I0815 00:29:57.803357   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 86/120
	I0815 00:29:58.804724   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 87/120
	I0815 00:29:59.806016   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 88/120
	I0815 00:30:00.807283   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 89/120
	I0815 00:30:01.808886   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 90/120
	I0815 00:30:02.810210   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 91/120
	I0815 00:30:03.811531   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 92/120
	I0815 00:30:04.813156   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 93/120
	I0815 00:30:05.814494   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 94/120
	I0815 00:30:06.816311   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 95/120
	I0815 00:30:07.818844   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 96/120
	I0815 00:30:08.820385   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 97/120
	I0815 00:30:09.822167   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 98/120
	I0815 00:30:10.823341   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 99/120
	I0815 00:30:11.825558   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 100/120
	I0815 00:30:12.827027   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 101/120
	I0815 00:30:13.828590   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 102/120
	I0815 00:30:14.830092   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 103/120
	I0815 00:30:15.831502   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 104/120
	I0815 00:30:16.833366   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 105/120
	I0815 00:30:17.834862   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 106/120
	I0815 00:30:18.836235   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 107/120
	I0815 00:30:19.837498   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 108/120
	I0815 00:30:20.838804   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 109/120
	I0815 00:30:21.840705   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 110/120
	I0815 00:30:22.841973   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 111/120
	I0815 00:30:23.843677   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 112/120
	I0815 00:30:24.845073   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 113/120
	I0815 00:30:25.846566   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 114/120
	I0815 00:30:26.848260   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 115/120
	I0815 00:30:27.849515   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 116/120
	I0815 00:30:28.850663   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 117/120
	I0815 00:30:29.851871   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 118/120
	I0815 00:30:30.853243   36443 main.go:141] libmachine: (ha-863044-m03) Waiting for machine to stop 119/120
	I0815 00:30:31.853773   36443 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 00:30:31.853835   36443 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 00:30:31.855708   36443 out.go:177] 
	W0815 00:30:31.856873   36443 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 00:30:31.856883   36443 out.go:239] * 
	* 
	W0815 00:30:31.858970   36443 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 00:30:31.860148   36443 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-863044 -v=7 --alsologtostderr" : exit status 82
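The GUEST_STOP_TIMEOUT above means node "ha-863044-m03" was still reported as "Running" after all 120 stop retries. A minimal sketch of how the stuck guest could be inspected directly on the host, assuming shell access to the Jenkins agent and that the kvm2 driver names the libvirt domain after the node (ha-863044-m03); the virsh/minikube invocations below are illustrative, not part of the captured test output:

	# check what libvirt thinks the domain state is
	virsh list --all
	virsh domstate ha-863044-m03
	# ask the guest to shut down gracefully; force it off if it ignores ACPI
	virsh shutdown ha-863044-m03
	virsh destroy ha-863044-m03
	# retry the stop once the domain is no longer running
	out/minikube-linux-amd64 stop -p ha-863044 -v=7 --alsologtostderr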
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863044 --wait=true -v=7 --alsologtostderr
E0815 00:33:45.640612   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:34:41.523130   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:35:08.705420   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-863044 --wait=true -v=7 --alsologtostderr: (4m44.655559079s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-863044
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863044 -n ha-863044
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863044 logs -n 25: (1.77169384s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m04 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp testdata/cp-test.txt                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m04_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03:/home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m03 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863044 node stop m02 -v=7                                                     | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863044 node start m02 -v=7                                                    | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863044 -v=7                                                           | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-863044 -v=7                                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-863044 --wait=true -v=7                                                    | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:30 UTC | 15 Aug 24 00:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863044                                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:35 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:30:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:30:31.903686   36932 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:30:31.903950   36932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:30:31.903960   36932 out.go:304] Setting ErrFile to fd 2...
	I0815 00:30:31.903964   36932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:30:31.904171   36932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:30:31.904801   36932 out.go:298] Setting JSON to false
	I0815 00:30:31.905736   36932 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4377,"bootTime":1723677455,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:30:31.905792   36932 start.go:139] virtualization: kvm guest
	I0815 00:30:31.908027   36932 out.go:177] * [ha-863044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:30:31.909644   36932 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:30:31.909681   36932 notify.go:220] Checking for updates...
	I0815 00:30:31.911854   36932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:30:31.913063   36932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:30:31.914116   36932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:30:31.915176   36932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:30:31.916374   36932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:30:31.918691   36932 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:30:31.918847   36932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:30:31.919456   36932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:30:31.919552   36932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:30:31.934451   36932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0815 00:30:31.934857   36932 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:30:31.935364   36932 main.go:141] libmachine: Using API Version  1
	I0815 00:30:31.935393   36932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:30:31.935742   36932 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:30:31.935937   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:31.970616   36932 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 00:30:31.971719   36932 start.go:297] selected driver: kvm2
	I0815 00:30:31.971736   36932 start.go:901] validating driver "kvm2" against &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:30:31.971929   36932 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:30:31.972365   36932 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:30:31.972447   36932 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:30:31.986827   36932 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:30:31.987693   36932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:30:31.987776   36932 cni.go:84] Creating CNI manager for ""
	I0815 00:30:31.987792   36932 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 00:30:31.987857   36932 start.go:340] cluster config:
	{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:30:31.988019   36932 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:30:31.989774   36932 out.go:177] * Starting "ha-863044" primary control-plane node in "ha-863044" cluster
	I0815 00:30:31.990950   36932 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:30:31.990977   36932 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:30:31.990988   36932 cache.go:56] Caching tarball of preloaded images
	I0815 00:30:31.991073   36932 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:30:31.991083   36932 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:30:31.991197   36932 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:30:31.991397   36932 start.go:360] acquireMachinesLock for ha-863044: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:30:31.991437   36932 start.go:364] duration metric: took 22.004µs to acquireMachinesLock for "ha-863044"
	I0815 00:30:31.991454   36932 start.go:96] Skipping create...Using existing machine configuration
	I0815 00:30:31.991467   36932 fix.go:54] fixHost starting: 
	I0815 00:30:31.991753   36932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:30:31.991783   36932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:30:32.005880   36932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0815 00:30:32.006307   36932 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:30:32.006776   36932 main.go:141] libmachine: Using API Version  1
	I0815 00:30:32.006794   36932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:30:32.007082   36932 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:30:32.007274   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:32.007473   36932 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:30:32.009035   36932 fix.go:112] recreateIfNeeded on ha-863044: state=Running err=<nil>
	W0815 00:30:32.009079   36932 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 00:30:32.010821   36932 out.go:177] * Updating the running kvm2 "ha-863044" VM ...
	I0815 00:30:32.011867   36932 machine.go:94] provisionDockerMachine start ...
	I0815 00:30:32.011882   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:32.012057   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.014453   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.014951   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.014982   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.015103   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.015257   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.015405   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.015530   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.015670   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.015841   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.015852   36932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:30:32.133402   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:30:32.133428   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.133620   36932 buildroot.go:166] provisioning hostname "ha-863044"
	I0815 00:30:32.133642   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.133865   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.136403   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.136773   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.136793   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.136938   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.137104   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.137237   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.137343   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.137484   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.137707   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.137721   36932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044 && echo "ha-863044" | sudo tee /etc/hostname
	I0815 00:30:32.263649   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:30:32.263697   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.266461   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.266806   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.266843   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.267048   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.267236   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.267380   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.267526   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.267683   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.267900   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.267918   36932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:30:32.381276   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:30:32.381306   36932 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:30:32.381322   36932 buildroot.go:174] setting up certificates
	I0815 00:30:32.381330   36932 provision.go:84] configureAuth start
	I0815 00:30:32.381338   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.381593   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:30:32.384132   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.384510   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.384560   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.384703   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.386857   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.387158   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.387181   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.387317   36932 provision.go:143] copyHostCerts
	I0815 00:30:32.387352   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:30:32.387381   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:30:32.387402   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:30:32.387472   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:30:32.387576   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:30:32.387602   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:30:32.387611   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:30:32.387640   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:30:32.387712   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:30:32.387734   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:30:32.387741   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:30:32.387774   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:30:32.387851   36932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044 san=[127.0.0.1 192.168.39.6 ha-863044 localhost minikube]
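The server certificate generated above embeds the SAN set [127.0.0.1 192.168.39.6 ha-863044 localhost minikube]. A minimal sketch of how those SANs could be confirmed on the generated file (path taken from the log; the openssl invocation is illustrative and was not part of the recorded run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list the DNS names ha-863044, localhost, minikube and the IPs 127.0.0.1, 192.168.39.6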
	I0815 00:30:32.651004   36932 provision.go:177] copyRemoteCerts
	I0815 00:30:32.651063   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:30:32.651085   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.653549   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.653855   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.653877   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.654066   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.654264   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.654429   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.654568   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:30:32.743399   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:30:32.743464   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:30:32.767752   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:30:32.767807   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 00:30:32.790338   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:30:32.790408   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:30:32.812132   36932 provision.go:87] duration metric: took 430.790925ms to configureAuth
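copyRemoteCerts stages ca.pem, server.pem and server-key.pem under /etc/docker on the guest before configureAuth is reported done (~431ms). A hypothetical spot-check over the same SSH identity the provisioner uses (IP, username and key path as logged above; not run by the test):

    ssh -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa \
      docker@192.168.39.6 'sudo ls -l /etc/docker'
    # expect ca.pem (1078 bytes), server.pem (1200 bytes) and server-key.pem (1675 bytes)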
	I0815 00:30:32.812155   36932 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:30:32.812423   36932 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:30:32.812508   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.814896   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.815192   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.815217   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.815377   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.815554   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.815706   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.815828   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.815964   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.816547   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.816588   36932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:32:03.536768   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:32:03.536795   36932 machine.go:97] duration metric: took 1m31.524917765s to provisionDockerMachine
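Nearly all of that 1m31.5s sits between the SSH command issued at 00:30:32.8 and its completion at 00:32:03.5, i.e. in writing the CRI-O options drop-in and restarting the service. The %!s(MISSING) token above is the logger consuming a literal %s format verb; restoring it, the command that was run is effectively:

    # reconstruction of the logged command, shown here only for readability
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio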
	I0815 00:32:03.536808   36932 start.go:293] postStartSetup for "ha-863044" (driver="kvm2")
	I0815 00:32:03.536817   36932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:32:03.536835   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.537246   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:32:03.537308   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.540326   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.540767   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.540789   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.540946   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.541122   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.541260   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.541425   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.626432   36932 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:32:03.630404   36932 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:32:03.630426   36932 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:32:03.630492   36932 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:32:03.630584   36932 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:32:03.630596   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:32:03.630678   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:32:03.639429   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:32:03.661531   36932 start.go:296] duration metric: took 124.713732ms for postStartSetup
	I0815 00:32:03.661561   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.661832   36932 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 00:32:03.661853   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.664330   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.664716   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.664741   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.664899   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.665061   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.665170   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.665331   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	W0815 00:32:03.750303   36932 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 00:32:03.750342   36932 fix.go:56] duration metric: took 1m31.758877355s for fixHost
	I0815 00:32:03.750369   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.753013   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.753382   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.753423   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.753551   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.753735   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.753900   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.754030   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.754174   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:32:03.754331   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:32:03.754341   36932 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:32:03.864995   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681923.822964831
	
	I0815 00:32:03.865016   36932 fix.go:216] guest clock: 1723681923.822964831
	I0815 00:32:03.865025   36932 fix.go:229] Guest: 2024-08-15 00:32:03.822964831 +0000 UTC Remote: 2024-08-15 00:32:03.750352164 +0000 UTC m=+91.881317148 (delta=72.612667ms)
	I0815 00:32:03.865058   36932 fix.go:200] guest clock delta is within tolerance: 72.612667ms
	I0815 00:32:03.865065   36932 start.go:83] releasing machines lock for "ha-863044", held for 1m31.873618392s
	I0815 00:32:03.865086   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.865324   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:32:03.867802   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.868158   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.868178   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.868431   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.868909   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.869121   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.869214   36932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:32:03.869267   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.869303   36932 ssh_runner.go:195] Run: cat /version.json
	I0815 00:32:03.869344   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.872062   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872332   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872430   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.872445   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872632   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.872782   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.872788   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.872832   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872927   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.872973   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.873147   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.873145   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.873299   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.873455   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.953938   36932 ssh_runner.go:195] Run: systemctl --version
	I0815 00:32:03.991871   36932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:32:04.150179   36932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:32:04.157986   36932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:32:04.158038   36932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:32:04.167393   36932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 00:32:04.167408   36932 start.go:495] detecting cgroup driver to use...
	I0815 00:32:04.167461   36932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:32:04.182320   36932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:32:04.195393   36932 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:32:04.195451   36932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:32:04.208315   36932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:32:04.221357   36932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:32:04.373383   36932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:32:04.508996   36932 docker.go:233] disabling docker service ...
	I0815 00:32:04.509055   36932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:32:04.524585   36932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:32:04.537086   36932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:32:04.675146   36932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:32:04.814822   36932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:32:04.828531   36932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:32:04.846650   36932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:32:04.846700   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.856294   36932 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:32:04.856361   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.865713   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.875231   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.884442   36932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:32:04.893879   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.903356   36932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.913565   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.923036   36932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:32:04.931669   36932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:32:04.940052   36932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:32:05.076029   36932 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:32:10.340622   36932 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.264557078s)
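The sed edits at 00:32:04 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list that opens unprivileged port binding; the 5.26s restart then picks them up. Assuming the stock layout of that drop-in (the TOML table headers below are an assumption, only the keys come from the logged sed commands), the file should end up looking roughly like:

    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]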
	I0815 00:32:10.340668   36932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:32:10.340719   36932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:32:10.345299   36932 start.go:563] Will wait 60s for crictl version
	I0815 00:32:10.345360   36932 ssh_runner.go:195] Run: which crictl
	I0815 00:32:10.348823   36932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:32:10.384620   36932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:32:10.384719   36932 ssh_runner.go:195] Run: crio --version
	I0815 00:32:10.411968   36932 ssh_runner.go:195] Run: crio --version
	I0815 00:32:10.439961   36932 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:32:10.441308   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:32:10.443992   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:10.444317   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:10.444344   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:10.444497   36932 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:32:10.448802   36932 kubeadm.go:883] updating cluster {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:32:10.448925   36932 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:32:10.448962   36932 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:32:10.490857   36932 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:32:10.490877   36932 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:32:10.490925   36932 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:32:10.525922   36932 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:32:10.525943   36932 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:32:10.525952   36932 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0815 00:32:10.526072   36932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:32:10.526143   36932 ssh_runner.go:195] Run: crio config
	I0815 00:32:10.573554   36932 cni.go:84] Creating CNI manager for ""
	I0815 00:32:10.573579   36932 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 00:32:10.573593   36932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:32:10.573616   36932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863044 NodeName:ha-863044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:32:10.573732   36932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863044"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:32:10.573750   36932 kube-vip.go:115] generating kube-vip config ...
	I0815 00:32:10.573792   36932 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:32:10.584741   36932 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:32:10.584847   36932 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
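The generated kube-vip static pod advertises 192.168.39.254 on eth0, matching the APIServerHAVIP and the control-plane.minikube.internal endpoint used throughout this profile. Hypothetical checks that the VIP is held and serving on the primary node (illustrative commands, not run by the test):

    ip addr show eth0 | grep 192.168.39.254        # the current leader should hold the VIP
    curl -sk https://192.168.39.254:8443/healthz   # expect "ok" once the apiserver is reachable
    sudo crictl ps --name kube-vip                 # static pod kube-vip-ha-863044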
	I0815 00:32:10.584896   36932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:32:10.593992   36932 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:32:10.594077   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 00:32:10.602936   36932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 00:32:10.617867   36932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:32:10.632241   36932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 00:32:10.646720   36932 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:32:10.663467   36932 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:32:10.666825   36932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:32:10.811586   36932 ssh_runner.go:195] Run: sudo systemctl start kubelet
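At this point the generated assets are on disk: the kubelet drop-in (307 bytes) at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the unit (352 bytes) at /lib/systemd/system/kubelet.service, the kubeadm config (2147 bytes) at /var/tmp/minikube/kubeadm.yaml.new and the kube-vip manifest (1441 bytes) at /etc/kubernetes/manifests/kube-vip.yaml, after which systemd is reloaded and kubelet started. Two hypothetical follow-ups, not part of the recorded run (the validate subcommand assumes kubeadm ≥ v1.27, which holds for the v1.31.0 binaries staged on this node):

    systemctl cat kubelet   # should show the 10-kubeadm.conf drop-in with the ExecStart above
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new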
	I0815 00:32:10.825456   36932 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.6
	I0815 00:32:10.825480   36932 certs.go:194] generating shared ca certs ...
	I0815 00:32:10.825499   36932 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.825664   36932 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:32:10.825714   36932 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:32:10.825727   36932 certs.go:256] generating profile certs ...
	I0815 00:32:10.825797   36932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:32:10.825822   36932 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5
	I0815 00:32:10.825835   36932 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.30 192.168.39.254]
	I0815 00:32:10.864688   36932 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 ...
	I0815 00:32:10.864711   36932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5: {Name:mkdbcfe42d6893282928e12ceebcc8caaa6002b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.864882   36932 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5 ...
	I0815 00:32:10.864896   36932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5: {Name:mk824d7809eacb3e171a3c693b9456bc31a3f949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.864990   36932 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:32:10.865137   36932 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:32:10.865256   36932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:32:10.865271   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:32:10.865283   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:32:10.865298   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:32:10.865317   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:32:10.865331   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:32:10.865344   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:32:10.865355   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:32:10.865365   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:32:10.865413   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:32:10.865440   36932 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:32:10.865448   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:32:10.865468   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:32:10.865494   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:32:10.865517   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:32:10.865560   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:32:10.865586   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:32:10.865599   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:10.865611   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:32:10.866106   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:32:10.889903   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:32:10.911743   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:32:10.933524   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:32:10.955253   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 00:32:10.977052   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:32:10.997985   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:32:11.019368   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:32:11.040875   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:32:11.062748   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:32:11.084255   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:32:11.105980   36932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:32:11.121661   36932 ssh_runner.go:195] Run: openssl version
	I0815 00:32:11.127336   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:32:11.137621   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.141739   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.141781   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.146832   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:32:11.155204   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:32:11.165175   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.169451   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.169488   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.174916   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:32:11.184114   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:32:11.194137   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.198060   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.198105   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.203376   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:32:11.211994   36932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:32:11.216929   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 00:32:11.221935   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 00:32:11.226918   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 00:32:11.231711   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 00:32:11.236723   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 00:32:11.241516   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
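The six openssl runs above use -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); the checks pass when each control-plane cert is still good for at least a day. A minimal illustration of the idiom (not from the recorded output):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "apiserver cert valid for at least another 24h"
    else
      echo "apiserver cert expires within 24h"
    fi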
	I0815 00:32:11.246424   36932 kubeadm.go:392] StartCluster: {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:32:11.246540   36932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:32:11.246572   36932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:32:11.281507   36932 cri.go:89] found id: "70696e8054023a651ead462ee31f548db94a8e40db8de76ffdf0e07ffc0839ea"
	I0815 00:32:11.281532   36932 cri.go:89] found id: "2837226a2ab92bec8f7f4be4c0f337b9b8b447569eb9df6783bda26a2c05653f"
	I0815 00:32:11.281538   36932 cri.go:89] found id: "c2e348136dca92210b1f249cc3d0bb46d0d1515f55819c3b11ba9e9f7cfe92f4"
	I0815 00:32:11.281543   36932 cri.go:89] found id: "8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d"
	I0815 00:32:11.281547   36932 cri.go:89] found id: "770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e"
	I0815 00:32:11.281551   36932 cri.go:89] found id: "a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787"
	I0815 00:32:11.281555   36932 cri.go:89] found id: "024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d"
	I0815 00:32:11.281559   36932 cri.go:89] found id: "5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a"
	I0815 00:32:11.281565   36932 cri.go:89] found id: "67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759"
	I0815 00:32:11.281570   36932 cri.go:89] found id: "9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506"
	I0815 00:32:11.281572   36932 cri.go:89] found id: "0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c"
	I0815 00:32:11.281575   36932 cri.go:89] found id: "acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6"
	I0815 00:32:11.281578   36932 cri.go:89] found id: "edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70"
	I0815 00:32:11.281580   36932 cri.go:89] found id: ""
	I0815 00:32:11.281629   36932 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.198757649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682117198728351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73f46f9c-0472-4b1e-b70c-00c77ff94188 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.199560204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b21b8933-9cbf-41bf-bee1-b5e6d85d61ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.199618358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b21b8933-9cbf-41bf-bee1-b5e6d85d61ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.200014045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b21b8933-9cbf-41bf-bee1-b5e6d85d61ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.248582825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8eb54647-2c78-4b3a-ae66-70822a3c62e9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.248657928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8eb54647-2c78-4b3a-ae66-70822a3c62e9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.250130196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eccfcfdf-0b58-4b46-a718-e4c532e3c762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.250597588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682117250571762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eccfcfdf-0b58-4b46-a718-e4c532e3c762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.251250860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b577c0c6-4fe5-413e-85bf-52ccb1c72e58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.251306082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b577c0c6-4fe5-413e-85bf-52ccb1c72e58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.251704828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b577c0c6-4fe5-413e-85bf-52ccb1c72e58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.290403968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f802585b-8b3c-4330-8ff8-110287907b3c name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.290481061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f802585b-8b3c-4330-8ff8-110287907b3c name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.291393585Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=679844c0-c3f0-4925-aea4-a79ce01843d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.291813741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682117291792731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=679844c0-c3f0-4925-aea4-a79ce01843d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.292522670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fe164fb-a41b-4763-939c-8809ed4791de name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.292586874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fe164fb-a41b-4763-939c-8809ed4791de name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.292973854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fe164fb-a41b-4763-939c-8809ed4791de name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.336728928Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13bc01ec-c33d-4363-aaaf-a72d33e2a314 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.336935992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13bc01ec-c33d-4363-aaaf-a72d33e2a314 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.338168311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ea04ef2-01b6-4e93-a15c-ea4b78d06198 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.338657322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682117338632917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ea04ef2-01b6-4e93-a15c-ea4b78d06198 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.339366063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ca691ce-2f80-4d06-8f6f-b86a7c3e0e83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.339421068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ca691ce-2f80-4d06-8f6f-b86a7c3e0e83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:35:17 ha-863044 crio[3583]: time="2024-08-15 00:35:17.339814333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ca691ce-2f80-4d06-8f6f-b86a7c3e0e83 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9242e96323c42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   8c8c0152a76d4       storage-provisioner
	7481189cf3801       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   3                   d28dd79bb029e       kube-controller-manager-ha-863044
	f92aa390854b4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   b3ae4347b75ec       kube-apiserver-ha-863044
	9e5abc65d96ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   e02cfeea63eea       busybox-7dff88458-ck6d9
	0955874b3483b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   2                   d28dd79bb029e       kube-controller-manager-ha-863044
	eb896d8cead14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   8c8c0152a76d4       storage-provisioner
	befdd0eb67c53       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   0e0553bced5b3       kube-vip-ha-863044
	7eb4acac741dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c0102ecec2d13       coredns-6f6b679f8f-jxpqd
	1d908dbe9fbec       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   5a637d5a9638e       kube-proxy-758vr
	5a78aff1a6bd8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   4f07d87d4b08c       kindnet-ptbpb
	8e036bda4ed25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   3893e58de9eec       coredns-6f6b679f8f-bc2jh
	af5b4659b9ea1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   60903fdbb380e       kube-scheduler-ha-863044
	dc0cce4b13205       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   a021f35eb00b1       etcd-ha-863044
	3177b3c6875f2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   b3ae4347b75ec       kube-apiserver-ha-863044
	4a3e7281c498f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   e9555e65cebe7       busybox-7dff88458-ck6d9
	770157c751290       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   1334a86739ccf       coredns-6f6b679f8f-bc2jh
	a6304cc907b70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   4feecb19b205a       coredns-6f6b679f8f-jxpqd
	024782bd78877       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   c2b2f0c2bdc2e       kindnet-ptbpb
	5d1d7d03658b7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   a6a3b389836fc       kube-proxy-758vr
	0624b371b469a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   ba41c766be2d5       kube-scheduler-ha-863044
	acf9154524991       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   1825ea5e56cf4       etcd-ha-863044
	
	
	==> coredns [770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e] <==
	[INFO] 10.244.1.2:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116945s
	[INFO] 10.244.1.2:51392 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008307s
	[INFO] 10.244.0.4:42010 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031726s
	[INFO] 10.244.2.2:44915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143127s
	[INFO] 10.244.2.2:37741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170015s
	[INFO] 10.244.2.2:58647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130581s
	[INFO] 10.244.1.2:49418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247229s
	[INFO] 10.244.1.2:44042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127451s
	[INFO] 10.244.1.2:41801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015235s
	[INFO] 10.244.1.2:51078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176731s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1544927018]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:30:18.089) (total time: 12825ms):
	Trace[1544927018]: ---"Objects listed" error:Unauthorized 12825ms (00:30:30.915)
	Trace[1544927018]: [12.825751377s] [12.825751377s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[324365653]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:30:18.419) (total time: 12495ms):
	Trace[324365653]: ---"Objects listed" error:Unauthorized 12495ms (00:30:30.915)
	Trace[324365653]: [12.495841363s] [12.495841363s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50096->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[836955325]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:32:32.598) (total time: 10484ms):
	Trace[836955325]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer 10484ms (00:32:43.083)
	Trace[836955325]: [10.484939563s] [10.484939563s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0] <==
	Trace[1649646237]: [10.001212572s] [10.001212572s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[285531728]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:32:26.264) (total time: 10001ms):
	Trace[285531728]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:32:36.265)
	Trace[285531728]: [10.001183768s] [10.001183768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47528->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47528->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787] <==
	[INFO] 10.244.1.2:32926 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109486s
	[INFO] 10.244.0.4:35014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015446s
	[INFO] 10.244.0.4:46414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148102s
	[INFO] 10.244.2.2:51282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016555s
	[INFO] 10.244.2.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529953s
	[INFO] 10.244.2.2:42863 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00043817s
	[INFO] 10.244.2.2:39074 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067798s
	[INFO] 10.244.1.2:52314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192016s
	[INFO] 10.244.1.2:58476 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116995s
	[INFO] 10.244.1.2:39360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001839118s
	[INFO] 10.244.0.4:51814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012471s
	[INFO] 10.244.0.4:40547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083981s
	[INFO] 10.244.2.2:34181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015996s
	[INFO] 10.244.2.2:56520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000727856s
	[INFO] 10.244.2.2:38242 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103367s
	[INFO] 10.244.1.2:50032 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.0.4:55523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123577s
	[INFO] 10.244.0.4:42586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010348s
	[INFO] 10.244.0.4:36103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184736s
	[INFO] 10.244.2.2:57332 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163958s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1890&timeout=7m27s&timeoutSeconds=447&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1888&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1891&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-863044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:21:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:35:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-863044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e33f2588c28f4daf846273c46c5ec17c
	  System UUID:                e33f2588-c28f-4daf-8462-73c46c5ec17c
	  Boot ID:                    262603d0-6087-4822-8e6c-89d7a28279b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ck6d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-bc2jh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-jxpqd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-863044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-ptbpb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-863044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-863044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-758vr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-863044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-863044                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-863044 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Warning  ContainerGCFailed        3m58s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m22s (x3 over 4m11s)  kubelet          Node ha-863044 status is now: NodeNotReady
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           111s                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	
	
	Name:               ha-863044-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:35:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-863044-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 877b666314684accbfd657286f8d0095
	  System UUID:                877b6663-1468-4acc-bfd6-57286f8d0095
	  Boot ID:                    608ac5ca-dc01-4492-ae62-64b381450129
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zmr7b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-863044-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-xpnzd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-863044-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-863044-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6l4gp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-863044-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-863044-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  NodeNotReady             9m28s                  node-controller  Node ha-863044-m02 status is now: NodeNotReady
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m45s (x8 over 2m45s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s (x8 over 2m45s)  kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s (x7 over 2m45s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           111s                   node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	
	
	Name:               ha-863044-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_23_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:23:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:35:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:34:53 +0000   Thu, 15 Aug 2024 00:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:34:53 +0000   Thu, 15 Aug 2024 00:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:34:53 +0000   Thu, 15 Aug 2024 00:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:34:53 +0000   Thu, 15 Aug 2024 00:34:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    ha-863044-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bba0a91434394dddbc59d67dd539b2b7
	  System UUID:                bba0a914-3439-4ddd-bc59-d67dd539b2b7
	  Boot ID:                    4a01ee9b-f64e-4ea0-92f3-0c3c1ff973bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dpcjf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-863044-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-jdl2d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-863044-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-863044-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qxmqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-863044-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-863044-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-863044-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal   RegisteredNode           2m23s              node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal   RegisteredNode           111s               node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	  Normal   NodeNotReady             103s               node-controller  Node ha-863044-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-863044-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-863044-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 55s                kubelet          Node ha-863044-m03 has been rebooted, boot id: 4a01ee9b-f64e-4ea0-92f3-0c3c1ff973bb
	  Normal   NodeReady                55s                kubelet          Node ha-863044-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-863044-m03 event: Registered Node ha-863044-m03 in Controller
	
	
	Name:               ha-863044-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_24_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:24:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:35:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:35:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:35:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:35:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-863044-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29de5816079a4aa6bb73571d88da2d1b
	  System UUID:                29de5816-079a-4aa6-bb73-571d88da2d1b
	  Boot ID:                    ab22bda9-429f-4e7b-925a-d953cf540ee2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7r4h2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-72j9n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-863044-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m23s              node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           111s               node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   NodeNotReady             103s               node-controller  Node ha-863044-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-863044-m04 has been rebooted, boot id: ab22bda9-429f-4e7b-925a-d953cf540ee2
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-863044-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                 kubelet          Node ha-863044-m04 status is now: NodeReady
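	Note: all four node snapshots above converge on Ready=True once the rebooted members re-register with the controller. A compact way to re-check only the Ready condition across the cluster, instead of re-running the full describe, is sketched below (the context name is the profile name ha-863044 from these logs):
	  $ kubectl --context ha-863044 get nodes -o wide
	  $ kubectl --context ha-863044 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'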
	
	
	==> dmesg <==
	[Aug15 00:21] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061023] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060159] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.174439] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118153] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.259429] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.778855] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.212652] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.060600] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.151808] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.077604] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.187372] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.703882] kauditd_printk_skb: 23 callbacks suppressed
	[Aug15 00:22] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 00:32] systemd-fstab-generator[3502]: Ignoring "noauto" option for root device
	[  +0.145711] systemd-fstab-generator[3514]: Ignoring "noauto" option for root device
	[  +0.161289] systemd-fstab-generator[3528]: Ignoring "noauto" option for root device
	[  +0.140986] systemd-fstab-generator[3540]: Ignoring "noauto" option for root device
	[  +0.258789] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +5.731086] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.087193] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.541912] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.577284] kauditd_printk_skb: 86 callbacks suppressed
	[ +19.235744] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 00:33] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6] <==
	{"level":"warn","ts":"2024-08-15T00:30:32.946689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:30:31.938008Z","time spent":"1.008671699s","remote":"127.0.0.1:58916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/08/15 00:30:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T00:30:32.985380Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:30:32.985420Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T00:30:32.985553Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6f26d2d338759d80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T00:30:32.985678Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985707Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985774Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985885Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985968Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985999Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.986007Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986018Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986125Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986293Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986338Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986419Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986482Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.989601Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"warn","ts":"2024-08-15T00:30:32.989699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.090958258s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T00:30:32.989735Z","caller":"traceutil/trace.go:171","msg":"trace[573312896] range","detail":"{range_begin:; range_end:; }","duration":"9.091008675s","start":"2024-08-15T00:30:23.898718Z","end":"2024-08-15T00:30:32.989727Z","steps":["trace[573312896] 'agreement among raft nodes before linearized reading'  (duration: 9.090957342s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T00:30:32.989790Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T00:30:32.990495Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-08-15T00:30:32.990527Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-863044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> etcd [dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed] <==
	{"level":"warn","ts":"2024-08-15T00:34:17.201867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:34:17.215641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6f26d2d338759d80","from":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T00:34:17.292312Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.30:2380/version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:17.292361Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:18.569882Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:18.569915Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:21.294331Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.30:2380/version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:21.294371Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:23.570727Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:23.570773Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:25.295595Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.30:2380/version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:25.295652Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:28.571231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:28.571321Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:29.298192Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.30:2380/version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:29.298320Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"fd5a13d6251910c6","error":"Get \"https://192.168.39.30:2380/version\": dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T00:34:31.621005Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.623242Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.623726Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.649773Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"fd5a13d6251910c6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T00:34:31.649894Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.654973Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"fd5a13d6251910c6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T00:34:31.655098Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:34:33.571608Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:33.571738Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:35:18 up 14 min,  0 users,  load average: 0.61, 0.42, 0.24
	Linux ha-863044 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d] <==
	I0815 00:30:08.919661       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:08.919802       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:30:08.920000       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:08.920099       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:08.920226       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:08.920269       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:30:08.920372       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:08.920413       1 main.go:299] handling current node
	I0815 00:30:18.924312       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:18.924407       1 main.go:299] handling current node
	I0815 00:30:18.924439       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:18.924470       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:30:18.924635       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:18.924657       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:18.924726       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:18.924745       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	E0815 00:30:21.195607       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1883&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0815 00:30:28.920419       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:28.920467       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:28.920642       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:28.920663       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:30:28.920735       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:28.920751       1 main.go:299] handling current node
	I0815 00:30:28.920783       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:28.920797       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f] <==
	I0815 00:34:38.956299       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:34:48.954540       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:34:48.954715       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:34:48.954903       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:34:48.954938       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:34:48.955085       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:34:48.955107       1 main.go:299] handling current node
	I0815 00:34:48.955123       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:34:48.955129       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:34:58.959198       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:34:58.959331       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:34:58.959614       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:34:58.959740       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:34:58.959875       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:34:58.959903       1 main.go:299] handling current node
	I0815 00:34:58.959925       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:34:58.959941       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:35:08.955194       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:35:08.955337       1 main.go:299] handling current node
	I0815 00:35:08.955366       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:35:08.955386       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:35:08.955541       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:35:08.955576       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:35:08.955661       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:35:08.955682       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f] <==
	I0815 00:32:18.156942       1 options.go:228] external host was not specified, using 192.168.39.6
	I0815 00:32:18.175563       1 server.go:142] Version: v1.31.0
	I0815 00:32:18.175609       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:32:19.035925       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 00:32:19.046900       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:32:19.051664       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 00:32:19.051694       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 00:32:19.051934       1 instance.go:232] Using reconciler: lease
	W0815 00:32:39.033585       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 00:32:39.033834       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 00:32:39.052887       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 00:32:39.052992       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
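	Note: the fatal "Error creating leases: error creating storage factory: context deadline exceeded" means this apiserver instance (3177b3…) came up before its local etcd answered on 127.0.0.1:2379 and exited; the replacement instance (f92aa3…, next section) syncs its caches successfully. When triaging this pattern by hand, the earlier container and its log can usually be pulled from the node with crictl, as in the sketch below (container ID taken, truncated, from the section header above):
	  $ minikube -p ha-863044 ssh "sudo crictl ps -a --name kube-apiserver"
	  $ minikube -p ha-863044 ssh "sudo crictl logs 3177b3c6875f2"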
	
	
	==> kube-apiserver [f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff] <==
	I0815 00:33:01.568813       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0815 00:33:01.568883       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0815 00:33:01.643246       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 00:33:01.644220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 00:33:01.646283       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 00:33:01.646323       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 00:33:01.646512       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 00:33:01.646562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 00:33:01.646568       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 00:33:01.644349       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 00:33:01.644361       1 aggregator.go:171] initial CRD sync complete...
	I0815 00:33:01.647821       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 00:33:01.647829       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 00:33:01.647833       1 cache.go:39] Caches are synced for autoregister controller
	I0815 00:33:01.653550       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0815 00:33:01.655203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.30]
	I0815 00:33:01.666670       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 00:33:01.677433       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:33:01.677470       1 policy_source.go:224] refreshing policies
	I0815 00:33:01.727950       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 00:33:01.758341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:33:01.767726       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 00:33:01.771350       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 00:33:02.546158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 00:33:02.884551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.30 192.168.39.6]
	
	
	==> kube-controller-manager [0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a] <==
	I0815 00:32:51.077774       1 serving.go:386] Generated self-signed cert in-memory
	I0815 00:32:51.714851       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 00:32:51.714972       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:32:51.717274       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 00:32:51.717410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 00:32:51.717876       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 00:32:51.717983       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 00:33:01.724616       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe] <==
	I0815 00:33:34.258831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:33:34.439198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.948999ms"
	I0815 00:33:34.439380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="135.514µs"
	I0815 00:33:36.421824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:33:39.485740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:33:41.882489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m02"
	I0815 00:33:46.506498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:33:49.473922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.571226ms"
	I0815 00:33:49.475249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.397µs"
	I0815 00:33:49.505866       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7tqfq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7tqfq\": the object has been modified; please apply your changes to the latest version and try again"
	I0815 00:33:49.505998       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e72e0335-4fb6-4b94-bbe5-0eee3a632744", APIVersion:"v1", ResourceVersion:"283", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7tqfq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7tqfq": the object has been modified; please apply your changes to the latest version and try again
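Editor's note: the "object has been modified; please apply your changes to the latest version and try again" entries above are ordinary optimistic-concurrency conflicts, and the controller resolves them by re-reading and retrying. A generic Go sketch of that retry pattern (an illustration only, not the controller's actual code; fetchLatest and tryUpdate are hypothetical stand-ins for a client's Get and Update calls):

	package main

	import (
		"errors"
		"fmt"
	)

	// errConflict stands in for the API server's 409 "object has been modified" response.
	var errConflict = errors.New("conflict: object has been modified")

	func fetchLatest() string { return "300" } // pretend the live object is at resourceVersion 300
	func tryUpdate(resourceVersion string) error {
		if resourceVersion != "300" {
			return errConflict // stale resourceVersion is rejected
		}
		return nil
	}

	func main() {
		rv := "283" // stale version, as in the log above
		for attempt := 0; attempt < 5; attempt++ {
			err := tryUpdate(rv)
			if err == nil {
				fmt.Println("update applied at resourceVersion", rv)
				return
			}
			if errors.Is(err, errConflict) {
				rv = fetchLatest() // re-read the latest version and retry, as the error message asks
				continue
			}
			fmt.Println("non-retryable error:", err)
			return
		}
		fmt.Println("giving up after repeated conflicts")
	}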
	I0815 00:33:49.569669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:34:22.598254       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:34:22.616514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:34:23.491589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.383µs"
	I0815 00:34:24.444306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:34:38.746204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:34:38.846995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:34:41.736538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.294242ms"
	I0815 00:34:41.739722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.346µs"
	I0815 00:34:53.227910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	I0815 00:35:09.459336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:35:09.459443       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:35:09.481665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:35:09.692804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	
	
	==> kube-proxy [1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:32:19.531764       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:22.603932       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:25.675509       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:31.819512       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:41.035666       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 00:33:00.249005       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0815 00:33:00.249208       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:33:00.360674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:33:00.360764       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:33:00.360816       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:33:00.363127       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:33:00.363450       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:33:00.363636       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:33:00.365406       1 config.go:197] "Starting service config controller"
	I0815 00:33:00.365479       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:33:00.365532       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:33:00.365560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:33:00.366339       1 config.go:326] "Starting node config controller"
	I0815 00:33:00.366420       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:33:00.466119       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:33:00.466259       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:33:00.468412       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a] <==
	E0815 00:29:18.219423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:18.219373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:18.219535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:21.355526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:21.355648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:24.427378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:24.427491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:27.500014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:27.500266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:27.500415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:27.500475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:36.717022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:36.717559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:39.790180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:39.790300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:39.790504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:39.790605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:58.220892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:58.220973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:01.292407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:01.292667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:04.364299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:04.364581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:32.011518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:32.011647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c] <==
	I0815 00:24:34.809950       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.844902       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:24:34.845683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ac2ee81-5268-49b4-80fc-2b9950b30cad(kube-system/kube-proxy-5ptdm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ptdm"
	E0815 00:24:34.845833       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-5ptdm"
	I0815 00:24:34.845899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:30:09.525295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0815 00:30:09.525552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0815 00:30:09.525659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0815 00:30:19.575178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 00:30:19.839730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 00:30:20.899645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:22.154277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	W0815 00:30:23.073427       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:30:23.073520       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0815 00:30:24.085015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 00:30:25.033302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:25.557857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	W0815 00:30:27.064441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:30:27.064490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0815 00:30:27.532107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 00:30:28.624966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:30.306291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0815 00:30:31.077778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 00:30:31.404176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0815 00:30:32.921230       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69] <==
	W0815 00:32:53.505975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:53.506129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:54.338562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:54.338725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:54.729817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.6:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:54.729947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.6:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:55.018602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.6:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:55.018676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.6:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:55.498187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:55.498326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.275850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.6:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.275894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.6:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.519491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.6:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.519607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.6:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.929017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.6:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.929657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.6:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.667574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.667694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.805689       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.6:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.805748       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.6:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.979816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.6:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.979931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.6:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:59.042756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:59.042878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	I0815 00:33:15.768976       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:33:41 ha-863044 kubelet[1326]: I0815 00:33:41.904384    1326 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-863044"
	Aug 15 00:33:42 ha-863044 kubelet[1326]: I0815 00:33:42.472298    1326 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-863044" podUID="ff875a81-1ee8-4073-a666-4f9dc4239e38"
	Aug 15 00:33:50 ha-863044 kubelet[1326]: E0815 00:33:50.095892    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682030095653206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:33:50 ha-863044 kubelet[1326]: E0815 00:33:50.095915    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682030095653206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:00 ha-863044 kubelet[1326]: E0815 00:34:00.098676    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682040098240288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:00 ha-863044 kubelet[1326]: E0815 00:34:00.098716    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682040098240288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:10 ha-863044 kubelet[1326]: E0815 00:34:10.100734    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682050100146039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:10 ha-863044 kubelet[1326]: E0815 00:34:10.100800    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682050100146039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:19 ha-863044 kubelet[1326]: E0815 00:34:19.908157    1326 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:34:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:34:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:34:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:34:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
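Editor's note: the kubelet canary failure above means the ip6tables "nat" table is unusable on this VM (the kernel module is not loaded). A minimal Go sketch that probes the same table from inside the node (assuming ip6tables is on PATH and the probe is run with sufficient privileges):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List the ip6 nat table; the error path below mirrors what the kubelet hits.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("ip6 nat table unusable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6 nat table is available")
	}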
	Aug 15 00:34:20 ha-863044 kubelet[1326]: E0815 00:34:20.103538    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682060103151532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:20 ha-863044 kubelet[1326]: E0815 00:34:20.103579    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682060103151532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:30 ha-863044 kubelet[1326]: E0815 00:34:30.106221    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682070105681595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:30 ha-863044 kubelet[1326]: E0815 00:34:30.106260    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682070105681595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:40 ha-863044 kubelet[1326]: E0815 00:34:40.108675    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682080108099994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:40 ha-863044 kubelet[1326]: E0815 00:34:40.108727    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682080108099994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:50 ha-863044 kubelet[1326]: E0815 00:34:50.110279    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682090109734974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:34:50 ha-863044 kubelet[1326]: E0815 00:34:50.110612    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682090109734974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:35:00 ha-863044 kubelet[1326]: E0815 00:35:00.112514    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682100112078971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:35:00 ha-863044 kubelet[1326]: E0815 00:35:00.113115    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682100112078971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:35:10 ha-863044 kubelet[1326]: E0815 00:35:10.115346    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682110114929414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:35:10 ha-863044 kubelet[1326]: E0815 00:35:10.115810    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682110114929414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 00:35:16.900765   38349 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
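Editor's note: the "bufio.Scanner: token too long" failure in the stderr above occurs when a single line in lastStart.txt exceeds Go's default 64 KiB scan-buffer limit. A minimal sketch of reading such a file with a larger buffer (the path is copied from the stderr output; the 1 MiB cap is an arbitrary example):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; raise it so very long log lines
		// no longer trigger "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan error:", err)
		}
	}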
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863044 -n ha-863044
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 stop -v=7 --alsologtostderr: exit status 82 (2m0.456872205s)
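Editor's note: the stop below spends the full two minutes in a per-node wait loop ("Waiting for machine to stop N/120"), polling roughly once per second, and the command fails once that budget is exhausted. A generic Go sketch of such a bounded wait (an illustration only, not minikube's actual code; isStopped is a hypothetical probe standing in for the driver's GetState call):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// isStopped is a hypothetical probe; the real code asks the VM driver for its state.
	func isStopped() bool { return false }

	func waitForStop(budget int) error {
		for i := 0; i < budget; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, budget)
			time.Sleep(time.Second)
		}
		return errors.New("machine did not stop within the wait budget")
	}

	func main() {
		if err := waitForStop(120); err != nil {
			fmt.Println(err) // the CLI surfaces this kind of timeout as a non-zero exit
		}
	}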

                                                
                                                
-- stdout --
	* Stopping node "ha-863044-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:35:35.950638   38761 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:35:35.950880   38761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:35:35.950888   38761 out.go:304] Setting ErrFile to fd 2...
	I0815 00:35:35.950892   38761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:35:35.951476   38761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:35:35.951851   38761 out.go:298] Setting JSON to false
	I0815 00:35:35.951953   38761 mustload.go:65] Loading cluster: ha-863044
	I0815 00:35:35.952541   38761 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:35:35.952678   38761 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:35:35.952884   38761 mustload.go:65] Loading cluster: ha-863044
	I0815 00:35:35.953038   38761 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:35:35.953078   38761 stop.go:39] StopHost: ha-863044-m04
	I0815 00:35:35.953438   38761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:35:35.953492   38761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:35:35.968331   38761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0815 00:35:35.968820   38761 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:35:35.969319   38761 main.go:141] libmachine: Using API Version  1
	I0815 00:35:35.969340   38761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:35:35.969703   38761 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:35:35.971972   38761 out.go:177] * Stopping node "ha-863044-m04"  ...
	I0815 00:35:35.973068   38761 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 00:35:35.973098   38761 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:35:35.973305   38761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 00:35:35.973322   38761 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:35:35.975779   38761 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:35:35.976191   38761 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:35:04 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:35:35.976219   38761 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:35:35.976404   38761 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:35:35.976627   38761 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:35:35.976812   38761 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:35:35.976970   38761 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	I0815 00:35:36.063365   38761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 00:35:36.116256   38761 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 00:35:36.168535   38761 main.go:141] libmachine: Stopping "ha-863044-m04"...
	I0815 00:35:36.168573   38761 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:35:36.170013   38761 main.go:141] libmachine: (ha-863044-m04) Calling .Stop
	I0815 00:35:36.173650   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 0/120
	I0815 00:35:37.175426   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 1/120
	I0815 00:35:38.176628   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 2/120
	I0815 00:35:39.177992   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 3/120
	I0815 00:35:40.179389   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 4/120
	I0815 00:35:41.180901   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 5/120
	I0815 00:35:42.183058   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 6/120
	I0815 00:35:43.184432   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 7/120
	I0815 00:35:44.185661   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 8/120
	I0815 00:35:45.187084   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 9/120
	I0815 00:35:46.189338   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 10/120
	I0815 00:35:47.191048   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 11/120
	I0815 00:35:48.192420   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 12/120
	I0815 00:35:49.193764   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 13/120
	I0815 00:35:50.195024   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 14/120
	I0815 00:35:51.196842   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 15/120
	I0815 00:35:52.199007   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 16/120
	I0815 00:35:53.200322   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 17/120
	I0815 00:35:54.201988   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 18/120
	I0815 00:35:55.203175   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 19/120
	I0815 00:35:56.204981   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 20/120
	I0815 00:35:57.206489   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 21/120
	I0815 00:35:58.207712   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 22/120
	I0815 00:35:59.208951   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 23/120
	I0815 00:36:00.211297   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 24/120
	I0815 00:36:01.212978   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 25/120
	I0815 00:36:02.215423   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 26/120
	I0815 00:36:03.216821   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 27/120
	I0815 00:36:04.219009   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 28/120
	I0815 00:36:05.220671   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 29/120
	I0815 00:36:06.222637   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 30/120
	I0815 00:36:07.223896   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 31/120
	I0815 00:36:08.225189   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 32/120
	I0815 00:36:09.226926   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 33/120
	I0815 00:36:10.228321   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 34/120
	I0815 00:36:11.230038   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 35/120
	I0815 00:36:12.231534   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 36/120
	I0815 00:36:13.232671   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 37/120
	I0815 00:36:14.234045   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 38/120
	I0815 00:36:15.235196   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 39/120
	I0815 00:36:16.237280   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 40/120
	I0815 00:36:17.238568   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 41/120
	I0815 00:36:18.240064   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 42/120
	I0815 00:36:19.241360   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 43/120
	I0815 00:36:20.243117   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 44/120
	I0815 00:36:21.245178   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 45/120
	I0815 00:36:22.246526   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 46/120
	I0815 00:36:23.247673   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 47/120
	I0815 00:36:24.249500   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 48/120
	I0815 00:36:25.250899   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 49/120
	I0815 00:36:26.252946   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 50/120
	I0815 00:36:27.254177   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 51/120
	I0815 00:36:28.255416   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 52/120
	I0815 00:36:29.256801   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 53/120
	I0815 00:36:30.258176   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 54/120
	I0815 00:36:31.260324   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 55/120
	I0815 00:36:32.261600   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 56/120
	I0815 00:36:33.263424   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 57/120
	I0815 00:36:34.264621   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 58/120
	I0815 00:36:35.265869   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 59/120
	I0815 00:36:36.267889   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 60/120
	I0815 00:36:37.269133   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 61/120
	I0815 00:36:38.270512   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 62/120
	I0815 00:36:39.271788   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 63/120
	I0815 00:36:40.273078   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 64/120
	I0815 00:36:41.275127   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 65/120
	I0815 00:36:42.276477   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 66/120
	I0815 00:36:43.277786   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 67/120
	I0815 00:36:44.279541   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 68/120
	I0815 00:36:45.281028   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 69/120
	I0815 00:36:46.283094   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 70/120
	I0815 00:36:47.284394   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 71/120
	I0815 00:36:48.286199   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 72/120
	I0815 00:36:49.287490   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 73/120
	I0815 00:36:50.288937   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 74/120
	I0815 00:36:51.290773   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 75/120
	I0815 00:36:52.292104   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 76/120
	I0815 00:36:53.294236   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 77/120
	I0815 00:36:54.295520   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 78/120
	I0815 00:36:55.296916   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 79/120
	I0815 00:36:56.299017   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 80/120
	I0815 00:36:57.301472   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 81/120
	I0815 00:36:58.303148   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 82/120
	I0815 00:36:59.304536   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 83/120
	I0815 00:37:00.305843   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 84/120
	I0815 00:37:01.307850   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 85/120
	I0815 00:37:02.309302   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 86/120
	I0815 00:37:03.310626   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 87/120
	I0815 00:37:04.312008   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 88/120
	I0815 00:37:05.313309   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 89/120
	I0815 00:37:06.315094   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 90/120
	I0815 00:37:07.316398   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 91/120
	I0815 00:37:08.317876   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 92/120
	I0815 00:37:09.319184   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 93/120
	I0815 00:37:10.320567   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 94/120
	I0815 00:37:11.322371   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 95/120
	I0815 00:37:12.323740   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 96/120
	I0815 00:37:13.325174   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 97/120
	I0815 00:37:14.326745   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 98/120
	I0815 00:37:15.327921   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 99/120
	I0815 00:37:16.330051   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 100/120
	I0815 00:37:17.331531   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 101/120
	I0815 00:37:18.332877   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 102/120
	I0815 00:37:19.334958   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 103/120
	I0815 00:37:20.337049   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 104/120
	I0815 00:37:21.338493   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 105/120
	I0815 00:37:22.340220   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 106/120
	I0815 00:37:23.341753   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 107/120
	I0815 00:37:24.343174   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 108/120
	I0815 00:37:25.344437   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 109/120
	I0815 00:37:26.346103   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 110/120
	I0815 00:37:27.347501   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 111/120
	I0815 00:37:28.348968   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 112/120
	I0815 00:37:29.350042   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 113/120
	I0815 00:37:30.351229   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 114/120
	I0815 00:37:31.353107   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 115/120
	I0815 00:37:32.354411   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 116/120
	I0815 00:37:33.355741   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 117/120
	I0815 00:37:34.357142   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 118/120
	I0815 00:37:35.358362   38761 main.go:141] libmachine: (ha-863044-m04) Waiting for machine to stop 119/120
	I0815 00:37:36.359514   38761 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 00:37:36.359597   38761 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 00:37:36.361210   38761 out.go:177] 
	W0815 00:37:36.362723   38761 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 00:37:36.362742   38761 out.go:239] * 
	* 
	W0815 00:37:36.364998   38761 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 00:37:36.366179   38761 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-863044 stop -v=7 --alsologtostderr": exit status 82
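The stderr above shows the kvm2 driver polling the VM state once per second and giving up after 120 attempts with GUEST_STOP_TIMEOUT. A minimal Go sketch of that bounded-polling pattern follows; the names vmDriver, waitForStop, and fakeVM are illustrative stand-ins, not minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmDriver is a hypothetical stand-in for a machine driver that can
// report its state; it is not minikube's real driver interface.
type vmDriver interface {
	GetState() (string, error) // e.g. "Running" or "Stopped"
}

// waitForStop polls the driver once per interval for at most maxAttempts
// attempts, mirroring the "Waiting for machine to stop N/120" lines above.
func waitForStop(d vmDriver, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM never reaches "Stopped", so waitForStop exhausts its attempts,
// analogous to the failed run captured above.
type fakeVM struct{}

func (fakeVM) GetState() (string, error) { return "Running", nil }

func main() {
	if err := waitForStop(fakeVM{}, 5, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}

In the run above, the real loop exhausted all 120 attempts because the guest never left the Running state, after which the test falls back to querying cluster status.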
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr: exit status 3 (18.89757222s)

                                                
                                                
-- stdout --
	ha-863044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-863044-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:37:36.408075   39184 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:37:36.408185   39184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:37:36.408193   39184 out.go:304] Setting ErrFile to fd 2...
	I0815 00:37:36.408197   39184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:37:36.408360   39184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:37:36.408506   39184 out.go:298] Setting JSON to false
	I0815 00:37:36.408525   39184 mustload.go:65] Loading cluster: ha-863044
	I0815 00:37:36.408560   39184 notify.go:220] Checking for updates...
	I0815 00:37:36.408960   39184 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:37:36.408977   39184 status.go:255] checking status of ha-863044 ...
	I0815 00:37:36.409436   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.409504   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.429303   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0815 00:37:36.429729   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.430203   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.430229   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.430540   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.430723   39184 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:37:36.432091   39184 status.go:330] ha-863044 host status = "Running" (err=<nil>)
	I0815 00:37:36.432103   39184 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:37:36.432366   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.432402   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.446557   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0815 00:37:36.446923   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.447381   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.447413   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.447714   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.447926   39184 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:37:36.450577   39184 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:37:36.450954   39184 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:37:36.450986   39184 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:37:36.451075   39184 host.go:66] Checking if "ha-863044" exists ...
	I0815 00:37:36.451355   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.451400   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.466108   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0815 00:37:36.466525   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.466997   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.467021   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.467316   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.467475   39184 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:37:36.467651   39184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:37:36.467689   39184 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:37:36.470663   39184 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:37:36.471060   39184 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:37:36.471086   39184 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:37:36.471202   39184 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:37:36.471363   39184 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:37:36.471593   39184 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:37:36.471817   39184 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:37:36.556829   39184 ssh_runner.go:195] Run: systemctl --version
	I0815 00:37:36.563552   39184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:37:36.578498   39184 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:37:36.578524   39184 api_server.go:166] Checking apiserver status ...
	I0815 00:37:36.578560   39184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:37:36.593102   39184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4909/cgroup
	W0815 00:37:36.601747   39184 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4909/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:37:36.601795   39184 ssh_runner.go:195] Run: ls
	I0815 00:37:36.606065   39184 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:37:36.612309   39184 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:37:36.612326   39184 status.go:422] ha-863044 apiserver status = Running (err=<nil>)
	I0815 00:37:36.612335   39184 status.go:257] ha-863044 status: &{Name:ha-863044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:37:36.612349   39184 status.go:255] checking status of ha-863044-m02 ...
	I0815 00:37:36.612627   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.612681   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.628624   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40257
	I0815 00:37:36.629027   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.629460   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.629481   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.629758   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.629938   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetState
	I0815 00:37:36.631637   39184 status.go:330] ha-863044-m02 host status = "Running" (err=<nil>)
	I0815 00:37:36.631653   39184 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:37:36.632028   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.632064   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.645970   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32919
	I0815 00:37:36.646343   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.646789   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.646817   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.647108   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.647291   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetIP
	I0815 00:37:36.650166   39184 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:37:36.650569   39184 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:32:21 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:37:36.650592   39184 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:37:36.650751   39184 host.go:66] Checking if "ha-863044-m02" exists ...
	I0815 00:37:36.651027   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.651065   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.665844   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0815 00:37:36.666246   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.666668   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.666686   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.666988   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.667140   39184 main.go:141] libmachine: (ha-863044-m02) Calling .DriverName
	I0815 00:37:36.667322   39184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:37:36.667344   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHHostname
	I0815 00:37:36.669631   39184 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:37:36.670032   39184 main.go:141] libmachine: (ha-863044-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:19:c9", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:32:21 +0000 UTC Type:0 Mac:52:54:00:4e:19:c9 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-863044-m02 Clientid:01:52:54:00:4e:19:c9}
	I0815 00:37:36.670056   39184 main.go:141] libmachine: (ha-863044-m02) DBG | domain ha-863044-m02 has defined IP address 192.168.39.170 and MAC address 52:54:00:4e:19:c9 in network mk-ha-863044
	I0815 00:37:36.670201   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHPort
	I0815 00:37:36.670393   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHKeyPath
	I0815 00:37:36.670592   39184 main.go:141] libmachine: (ha-863044-m02) Calling .GetSSHUsername
	I0815 00:37:36.670750   39184 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m02/id_rsa Username:docker}
	I0815 00:37:36.761031   39184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:37:36.777749   39184 kubeconfig.go:125] found "ha-863044" server: "https://192.168.39.254:8443"
	I0815 00:37:36.777772   39184 api_server.go:166] Checking apiserver status ...
	I0815 00:37:36.777810   39184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:37:36.791941   39184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	W0815 00:37:36.800736   39184 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:37:36.800792   39184 ssh_runner.go:195] Run: ls
	I0815 00:37:36.804703   39184 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 00:37:36.808842   39184 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 00:37:36.808862   39184 status.go:422] ha-863044-m02 apiserver status = Running (err=<nil>)
	I0815 00:37:36.808874   39184 status.go:257] ha-863044-m02 status: &{Name:ha-863044-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:37:36.808896   39184 status.go:255] checking status of ha-863044-m04 ...
	I0815 00:37:36.809178   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.809215   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.823490   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0815 00:37:36.823888   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.824357   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.824387   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.824672   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.824853   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetState
	I0815 00:37:36.826343   39184 status.go:330] ha-863044-m04 host status = "Running" (err=<nil>)
	I0815 00:37:36.826362   39184 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:37:36.826640   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.826669   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.840522   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35119
	I0815 00:37:36.840954   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.841488   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.841506   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.841785   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.841967   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetIP
	I0815 00:37:36.844689   39184 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:37:36.845133   39184 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:35:04 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:37:36.845164   39184 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:37:36.845300   39184 host.go:66] Checking if "ha-863044-m04" exists ...
	I0815 00:37:36.845620   39184 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:37:36.845657   39184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:37:36.861601   39184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0815 00:37:36.861977   39184 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:37:36.862358   39184 main.go:141] libmachine: Using API Version  1
	I0815 00:37:36.862377   39184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:37:36.862706   39184 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:37:36.862865   39184 main.go:141] libmachine: (ha-863044-m04) Calling .DriverName
	I0815 00:37:36.863026   39184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:37:36.863043   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHHostname
	I0815 00:37:36.865708   39184 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:37:36.866114   39184 main.go:141] libmachine: (ha-863044-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:14:6a", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:35:04 +0000 UTC Type:0 Mac:52:54:00:01:14:6a Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-863044-m04 Clientid:01:52:54:00:01:14:6a}
	I0815 00:37:36.866139   39184 main.go:141] libmachine: (ha-863044-m04) DBG | domain ha-863044-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:01:14:6a in network mk-ha-863044
	I0815 00:37:36.866290   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHPort
	I0815 00:37:36.866471   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHKeyPath
	I0815 00:37:36.866604   39184 main.go:141] libmachine: (ha-863044-m04) Calling .GetSSHUsername
	I0815 00:37:36.866786   39184 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044-m04/id_rsa Username:docker}
	W0815 00:37:55.264826   39184 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.247:22: connect: no route to host
	W0815 00:37:55.264898   39184 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	E0815 00:37:55.264912   39184 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host
	I0815 00:37:55.264918   39184 status.go:257] ha-863044-m04 status: &{Name:ha-863044-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 00:37:55.264932   39184 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.247:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr" : exit status 3
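The status probe logged above SSHes into each node, checks disk usage and the kubelet service, and then queries the apiserver's /healthz endpoint; when the SSH dial to ha-863044-m04 fails with "no route to host", that node is reported as host: Error / kubelet: Nonexistent. Below is a minimal Go sketch of such a healthz probe, assuming a self-signed apiserver certificate (so TLS verification is skipped) and a short client timeout; it is an illustration of the pattern, not minikube's exact implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy issues a GET against the given healthz URL and reports
// whether it returned HTTP 200, mirroring the "Checking apiserver healthz"
// log lines above. Dial failures (e.g. "no route to host") surface as err.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver certificate is self-signed for the VIP.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}

A probe like this only covers the control-plane endpoint; the per-node host and kubelet fields in the status output come from the separate SSH checks, which is why m04 can be unreachable while the shared apiserver VIP still answers 200.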
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-863044 -n ha-863044
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-863044 logs -n 25: (1.510920228s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m04 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp testdata/cp-test.txt                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044:/home/docker/cp-test_ha-863044-m04_ha-863044.txt                       |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044 sudo cat                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044.txt                                 |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m02:/home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m02 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m03:/home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n                                                                 | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | ha-863044-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-863044 ssh -n ha-863044-m03 sudo cat                                          | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC | 15 Aug 24 00:25 UTC |
	|         | /home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-863044 node stop m02 -v=7                                                     | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-863044 node start m02 -v=7                                                    | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863044 -v=7                                                           | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-863044 -v=7                                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-863044 --wait=true -v=7                                                    | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:30 UTC | 15 Aug 24 00:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-863044                                                                | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:35 UTC |                     |
	| node    | ha-863044 node delete m03 -v=7                                                   | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:35 UTC | 15 Aug 24 00:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-863044 stop -v=7                                                              | ha-863044 | jenkins | v1.33.1 | 15 Aug 24 00:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:30:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:30:31.903686   36932 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:30:31.903950   36932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:30:31.903960   36932 out.go:304] Setting ErrFile to fd 2...
	I0815 00:30:31.903964   36932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:30:31.904171   36932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:30:31.904801   36932 out.go:298] Setting JSON to false
	I0815 00:30:31.905736   36932 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4377,"bootTime":1723677455,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:30:31.905792   36932 start.go:139] virtualization: kvm guest
	I0815 00:30:31.908027   36932 out.go:177] * [ha-863044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:30:31.909644   36932 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:30:31.909681   36932 notify.go:220] Checking for updates...
	I0815 00:30:31.911854   36932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:30:31.913063   36932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:30:31.914116   36932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:30:31.915176   36932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:30:31.916374   36932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:30:31.918691   36932 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:30:31.918847   36932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:30:31.919456   36932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:30:31.919552   36932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:30:31.934451   36932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I0815 00:30:31.934857   36932 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:30:31.935364   36932 main.go:141] libmachine: Using API Version  1
	I0815 00:30:31.935393   36932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:30:31.935742   36932 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:30:31.935937   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:31.970616   36932 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 00:30:31.971719   36932 start.go:297] selected driver: kvm2
	I0815 00:30:31.971736   36932 start.go:901] validating driver "kvm2" against &{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:30:31.971929   36932 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:30:31.972365   36932 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:30:31.972447   36932 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:30:31.986827   36932 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:30:31.987693   36932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:30:31.987776   36932 cni.go:84] Creating CNI manager for ""
	I0815 00:30:31.987792   36932 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 00:30:31.987857   36932 start.go:340] cluster config:
	{Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:30:31.988019   36932 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:30:31.989774   36932 out.go:177] * Starting "ha-863044" primary control-plane node in "ha-863044" cluster
	I0815 00:30:31.990950   36932 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:30:31.990977   36932 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:30:31.990988   36932 cache.go:56] Caching tarball of preloaded images
	I0815 00:30:31.991073   36932 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:30:31.991083   36932 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:30:31.991197   36932 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/config.json ...
	I0815 00:30:31.991397   36932 start.go:360] acquireMachinesLock for ha-863044: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:30:31.991437   36932 start.go:364] duration metric: took 22.004µs to acquireMachinesLock for "ha-863044"
	I0815 00:30:31.991454   36932 start.go:96] Skipping create...Using existing machine configuration
	I0815 00:30:31.991467   36932 fix.go:54] fixHost starting: 
	I0815 00:30:31.991753   36932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:30:31.991783   36932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:30:32.005880   36932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0815 00:30:32.006307   36932 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:30:32.006776   36932 main.go:141] libmachine: Using API Version  1
	I0815 00:30:32.006794   36932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:30:32.007082   36932 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:30:32.007274   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:32.007473   36932 main.go:141] libmachine: (ha-863044) Calling .GetState
	I0815 00:30:32.009035   36932 fix.go:112] recreateIfNeeded on ha-863044: state=Running err=<nil>
	W0815 00:30:32.009079   36932 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 00:30:32.010821   36932 out.go:177] * Updating the running kvm2 "ha-863044" VM ...
	I0815 00:30:32.011867   36932 machine.go:94] provisionDockerMachine start ...
	I0815 00:30:32.011882   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:30:32.012057   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.014453   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.014951   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.014982   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.015103   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.015257   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.015405   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.015530   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.015670   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.015841   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.015852   36932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:30:32.133402   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:30:32.133428   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.133620   36932 buildroot.go:166] provisioning hostname "ha-863044"
	I0815 00:30:32.133642   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.133865   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.136403   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.136773   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.136793   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.136938   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.137104   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.137237   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.137343   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.137484   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.137707   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.137721   36932 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-863044 && echo "ha-863044" | sudo tee /etc/hostname
	I0815 00:30:32.263649   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-863044
	
	I0815 00:30:32.263697   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.266461   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.266806   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.266843   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.267048   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.267236   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.267380   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.267526   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.267683   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.267900   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.267918   36932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-863044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-863044/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-863044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:30:32.381276   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:30:32.381306   36932 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:30:32.381322   36932 buildroot.go:174] setting up certificates
	I0815 00:30:32.381330   36932 provision.go:84] configureAuth start
	I0815 00:30:32.381338   36932 main.go:141] libmachine: (ha-863044) Calling .GetMachineName
	I0815 00:30:32.381593   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:30:32.384132   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.384510   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.384560   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.384703   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.386857   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.387158   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.387181   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.387317   36932 provision.go:143] copyHostCerts
	I0815 00:30:32.387352   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:30:32.387381   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:30:32.387402   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:30:32.387472   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:30:32.387576   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:30:32.387602   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:30:32.387611   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:30:32.387640   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:30:32.387712   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:30:32.387734   36932 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:30:32.387741   36932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:30:32.387774   36932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:30:32.387851   36932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.ha-863044 san=[127.0.0.1 192.168.39.6 ha-863044 localhost minikube]
	I0815 00:30:32.651004   36932 provision.go:177] copyRemoteCerts
	I0815 00:30:32.651063   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:30:32.651085   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.653549   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.653855   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.653877   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.654066   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.654264   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.654429   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.654568   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:30:32.743399   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:30:32.743464   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:30:32.767752   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:30:32.767807   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 00:30:32.790338   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:30:32.790408   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:30:32.812132   36932 provision.go:87] duration metric: took 430.790925ms to configureAuth
	I0815 00:30:32.812155   36932 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:30:32.812423   36932 config.go:182] Loaded profile config "ha-863044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:30:32.812508   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:30:32.814896   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.815192   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:30:32.815217   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:30:32.815377   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:30:32.815554   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.815706   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:30:32.815828   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:30:32.815964   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:30:32.816547   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:30:32.816588   36932 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:32:03.536768   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:32:03.536795   36932 machine.go:97] duration metric: took 1m31.524917765s to provisionDockerMachine
	I0815 00:32:03.536808   36932 start.go:293] postStartSetup for "ha-863044" (driver="kvm2")
	I0815 00:32:03.536817   36932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:32:03.536835   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.537246   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:32:03.537308   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.540326   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.540767   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.540789   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.540946   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.541122   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.541260   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.541425   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.626432   36932 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:32:03.630404   36932 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:32:03.630426   36932 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:32:03.630492   36932 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:32:03.630584   36932 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:32:03.630596   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:32:03.630678   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:32:03.639429   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:32:03.661531   36932 start.go:296] duration metric: took 124.713732ms for postStartSetup
	I0815 00:32:03.661561   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.661832   36932 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 00:32:03.661853   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.664330   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.664716   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.664741   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.664899   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.665061   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.665170   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.665331   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	W0815 00:32:03.750303   36932 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 00:32:03.750342   36932 fix.go:56] duration metric: took 1m31.758877355s for fixHost
	I0815 00:32:03.750369   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.753013   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.753382   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.753423   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.753551   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.753735   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.753900   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.754030   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.754174   36932 main.go:141] libmachine: Using SSH client type: native
	I0815 00:32:03.754331   36932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0815 00:32:03.754341   36932 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:32:03.864995   36932 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723681923.822964831
	
	I0815 00:32:03.865016   36932 fix.go:216] guest clock: 1723681923.822964831
	I0815 00:32:03.865025   36932 fix.go:229] Guest: 2024-08-15 00:32:03.822964831 +0000 UTC Remote: 2024-08-15 00:32:03.750352164 +0000 UTC m=+91.881317148 (delta=72.612667ms)
	I0815 00:32:03.865058   36932 fix.go:200] guest clock delta is within tolerance: 72.612667ms
	I0815 00:32:03.865065   36932 start.go:83] releasing machines lock for "ha-863044", held for 1m31.873618392s
	I0815 00:32:03.865086   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.865324   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:32:03.867802   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.868158   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.868178   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.868431   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.868909   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.869121   36932 main.go:141] libmachine: (ha-863044) Calling .DriverName
	I0815 00:32:03.869214   36932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:32:03.869267   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.869303   36932 ssh_runner.go:195] Run: cat /version.json
	I0815 00:32:03.869344   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHHostname
	I0815 00:32:03.872062   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872332   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872430   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.872445   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872632   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.872782   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:03.872788   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.872832   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:03.872927   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.872973   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHPort
	I0815 00:32:03.873147   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHKeyPath
	I0815 00:32:03.873145   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.873299   36932 main.go:141] libmachine: (ha-863044) Calling .GetSSHUsername
	I0815 00:32:03.873455   36932 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/ha-863044/id_rsa Username:docker}
	I0815 00:32:03.953938   36932 ssh_runner.go:195] Run: systemctl --version
	I0815 00:32:03.991871   36932 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:32:04.150179   36932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 00:32:04.157986   36932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:32:04.158038   36932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:32:04.167393   36932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 00:32:04.167408   36932 start.go:495] detecting cgroup driver to use...
	I0815 00:32:04.167461   36932 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:32:04.182320   36932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:32:04.195393   36932 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:32:04.195451   36932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:32:04.208315   36932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:32:04.221357   36932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:32:04.373383   36932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:32:04.508996   36932 docker.go:233] disabling docker service ...
	I0815 00:32:04.509055   36932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:32:04.524585   36932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:32:04.537086   36932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:32:04.675146   36932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:32:04.814822   36932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:32:04.828531   36932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:32:04.846650   36932 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:32:04.846700   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.856294   36932 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:32:04.856361   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.865713   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.875231   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.884442   36932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:32:04.893879   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.903356   36932 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.913565   36932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:32:04.923036   36932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:32:04.931669   36932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:32:04.940052   36932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:32:05.076029   36932 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:32:10.340622   36932 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.264557078s)
	I0815 00:32:10.340668   36932 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:32:10.340719   36932 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:32:10.345299   36932 start.go:563] Will wait 60s for crictl version
	I0815 00:32:10.345360   36932 ssh_runner.go:195] Run: which crictl
	I0815 00:32:10.348823   36932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:32:10.384620   36932 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:32:10.384719   36932 ssh_runner.go:195] Run: crio --version
	I0815 00:32:10.411968   36932 ssh_runner.go:195] Run: crio --version
	I0815 00:32:10.439961   36932 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:32:10.441308   36932 main.go:141] libmachine: (ha-863044) Calling .GetIP
	I0815 00:32:10.443992   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:10.444317   36932 main.go:141] libmachine: (ha-863044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:35:5d", ip: ""} in network mk-ha-863044: {Iface:virbr1 ExpiryTime:2024-08-15 01:20:51 +0000 UTC Type:0 Mac:52:54:00:32:35:5d Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-863044 Clientid:01:52:54:00:32:35:5d}
	I0815 00:32:10.444344   36932 main.go:141] libmachine: (ha-863044) DBG | domain ha-863044 has defined IP address 192.168.39.6 and MAC address 52:54:00:32:35:5d in network mk-ha-863044
	I0815 00:32:10.444497   36932 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:32:10.448802   36932 kubeadm.go:883] updating cluster {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:32:10.448925   36932 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:32:10.448962   36932 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:32:10.490857   36932 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:32:10.490877   36932 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:32:10.490925   36932 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:32:10.525922   36932 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:32:10.525943   36932 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:32:10.525952   36932 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.0 crio true true} ...
	I0815 00:32:10.526072   36932 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-863044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 00:32:10.526143   36932 ssh_runner.go:195] Run: crio config
	I0815 00:32:10.573554   36932 cni.go:84] Creating CNI manager for ""
	I0815 00:32:10.573579   36932 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 00:32:10.573593   36932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:32:10.573616   36932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-863044 NodeName:ha-863044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:32:10.573732   36932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-863044"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:32:10.573750   36932 kube-vip.go:115] generating kube-vip config ...
	I0815 00:32:10.573792   36932 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 00:32:10.584741   36932 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 00:32:10.584847   36932 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 00:32:10.584896   36932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:32:10.593992   36932 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:32:10.594077   36932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 00:32:10.602936   36932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0815 00:32:10.617867   36932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:32:10.632241   36932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0815 00:32:10.646720   36932 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 00:32:10.663467   36932 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 00:32:10.666825   36932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:32:10.811586   36932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:32:10.825456   36932 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044 for IP: 192.168.39.6
	I0815 00:32:10.825480   36932 certs.go:194] generating shared ca certs ...
	I0815 00:32:10.825499   36932 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.825664   36932 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:32:10.825714   36932 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:32:10.825727   36932 certs.go:256] generating profile certs ...
	I0815 00:32:10.825797   36932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/client.key
	I0815 00:32:10.825822   36932 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5
	I0815 00:32:10.825835   36932 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.6 192.168.39.170 192.168.39.30 192.168.39.254]
	I0815 00:32:10.864688   36932 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 ...
	I0815 00:32:10.864711   36932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5: {Name:mkdbcfe42d6893282928e12ceebcc8caaa6002b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.864882   36932 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5 ...
	I0815 00:32:10.864896   36932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5: {Name:mk824d7809eacb3e171a3c693b9456bc31a3f949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:32:10.864990   36932 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt.22de4ae5 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt
	I0815 00:32:10.865137   36932 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key.22de4ae5 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key
	I0815 00:32:10.865256   36932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key
	I0815 00:32:10.865271   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:32:10.865283   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:32:10.865298   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:32:10.865317   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:32:10.865331   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:32:10.865344   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:32:10.865355   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:32:10.865365   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:32:10.865413   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:32:10.865440   36932 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:32:10.865448   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:32:10.865468   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:32:10.865494   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:32:10.865517   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:32:10.865560   36932 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:32:10.865586   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:32:10.865599   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:10.865611   36932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:32:10.866106   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:32:10.889903   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:32:10.911743   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:32:10.933524   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:32:10.955253   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 00:32:10.977052   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:32:10.997985   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:32:11.019368   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/ha-863044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 00:32:11.040875   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:32:11.062748   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:32:11.084255   36932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:32:11.105980   36932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:32:11.121661   36932 ssh_runner.go:195] Run: openssl version
	I0815 00:32:11.127336   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:32:11.137621   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.141739   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.141781   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:32:11.146832   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:32:11.155204   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:32:11.165175   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.169451   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.169488   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:32:11.174916   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:32:11.184114   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:32:11.194137   36932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.198060   36932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.198105   36932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:32:11.203376   36932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:32:11.211994   36932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:32:11.216929   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 00:32:11.221935   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 00:32:11.226918   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 00:32:11.231711   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 00:32:11.236723   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 00:32:11.241516   36932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 00:32:11.246424   36932 kubeadm.go:392] StartCluster: {Name:ha-863044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-863044 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:32:11.246540   36932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:32:11.246572   36932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:32:11.281507   36932 cri.go:89] found id: "70696e8054023a651ead462ee31f548db94a8e40db8de76ffdf0e07ffc0839ea"
	I0815 00:32:11.281532   36932 cri.go:89] found id: "2837226a2ab92bec8f7f4be4c0f337b9b8b447569eb9df6783bda26a2c05653f"
	I0815 00:32:11.281538   36932 cri.go:89] found id: "c2e348136dca92210b1f249cc3d0bb46d0d1515f55819c3b11ba9e9f7cfe92f4"
	I0815 00:32:11.281543   36932 cri.go:89] found id: "8c05051caebc6b89e60379c49e52352cbd01e34ef4efe6f58a5441cb275e051d"
	I0815 00:32:11.281547   36932 cri.go:89] found id: "770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e"
	I0815 00:32:11.281551   36932 cri.go:89] found id: "a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787"
	I0815 00:32:11.281555   36932 cri.go:89] found id: "024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d"
	I0815 00:32:11.281559   36932 cri.go:89] found id: "5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a"
	I0815 00:32:11.281565   36932 cri.go:89] found id: "67611ae45f1e5eeda73fa4909e4ae85ff1de3ce19a810bf0cb7140feb5211759"
	I0815 00:32:11.281570   36932 cri.go:89] found id: "9038fb04ce7173166cb52181ceecd41cf82d733826ddf68ed5f5eb8894457506"
	I0815 00:32:11.281572   36932 cri.go:89] found id: "0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c"
	I0815 00:32:11.281575   36932 cri.go:89] found id: "acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6"
	I0815 00:32:11.281578   36932 cri.go:89] found id: "edee09d480aed745af29289f4e354836948af49f83b51332c70381c2589a7b70"
	I0815 00:32:11.281580   36932 cri.go:89] found id: ""
	I0815 00:32:11.281629   36932 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.881598019Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-ck6d9,Uid:5655c46c-c830-4271-882b-c6230009cf90,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681971042557857,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:23:53.717617613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-863044,Uid:ed71e11d0913ce366e9aa90e4e79fd10,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723681951826146715,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{kubernetes.io/config.hash: ed71e11d0913ce366e9aa90e4e79fd10,kubernetes.io/config.seen: 2024-08-15T00:32:10.619930077Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-bc2jh,Uid:77760785-a989-4c45-a8e0-e758db3a252b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937441425933,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-15T00:21:39.150953511Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&PodSandboxMetadata{Name:kindnet-ptbpb,Uid:b1fee332-fbc7-4b7b-818a-9ba398dce43e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937352560434,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:21:23.924456344Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-jxpqd,Uid:72e46071-4563-4c8c-a269-c32c4d0fced3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723
681937348543115,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:21:39.135810069Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a7565569-2f8c-4393-b4f8-b8548d65f794,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937324861571,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{kubectl.kubernetes.io/last-appli
ed-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T00:21:39.150757434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-863044,Uid:d79d9d36b64f0d7c9696d4bf898501f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1723681937321901176,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d79d9d36b64f0d7c9696d4bf898501f1,kubernetes.io/config.seen: 2024-08-15T00:21:19.832243999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-863044,Uid:724fd3a4e6a5da4ff0fd467854a55959,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937315323152,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,tier: control-plane,},An
notations:map[string]string{kubernetes.io/config.hash: 724fd3a4e6a5da4ff0fd467854a55959,kubernetes.io/config.seen: 2024-08-15T00:21:19.832242536Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&PodSandboxMetadata{Name:etcd-ha-863044,Uid:c3a9e53655db1290456ab14b86c00883,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937312912574,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: c3a9e53655db1290456ab14b86c00883,kubernetes.io/config.seen: 2024-08-15T00:21:19.832240311Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e
7344d97d3c6b7629,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-863044,Uid:86b417c56f3a2467bc7657bd68236d14,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937291184994,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 86b417c56f3a2467bc7657bd68236d14,kubernetes.io/config.seen: 2024-08-15T00:21:19.832241437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&PodSandboxMetadata{Name:kube-proxy-758vr,Uid:0963208c-92ef-4625-8805-1c8ad8ae7b51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723681937263808487,Labels:map[string]string{controll
er-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T00:21:23.914668358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ad85141a-425a-4c00-a703-c1dfd6f0ff14 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.882401781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4c5e01c-ece7-467e-b269-2c7ffbdbc89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.882457410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4c5e01c-ece7-467e-b269-2c7ffbdbc89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.882690418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e5365
5db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4c5e01c-ece7-467e-b269-2c7ffbdbc89a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.900318805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7fd67d3-f22f-40f1-87d4-c5cee08b4a89 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.900391773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7fd67d3-f22f-40f1-87d4-c5cee08b4a89 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.901460185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb092df6-b005-4a1a-8a79-9e6ef12d3e4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.902312860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682275902287097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb092df6-b005-4a1a-8a79-9e6ef12d3e4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.902823180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=367bcd09-1c47-401a-b0a6-fac1a81aa2ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.902991379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=367bcd09-1c47-401a-b0a6-fac1a81aa2ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.903625092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=367bcd09-1c47-401a-b0a6-fac1a81aa2ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.945635418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e43ebca-ef26-4f1d-9894-fbd008f8d81f name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.945709050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e43ebca-ef26-4f1d-9894-fbd008f8d81f name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.946812919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=511c16d1-4cd0-4f6f-8bc3-aacbe6734e58 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.947446734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682275947419099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=511c16d1-4cd0-4f6f-8bc3-aacbe6734e58 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.947978984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41aeea45-128a-4534-a923-ad5a0a2ceb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.948091478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41aeea45-128a-4534-a923-ad5a0a2ceb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.948503897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41aeea45-128a-4534-a923-ad5a0a2ceb56 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.989705187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ad52899-cab9-440f-a298-25d9eae36ba9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.989794096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ad52899-cab9-440f-a298-25d9eae36ba9 name=/runtime.v1.RuntimeService/Version
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.990875815Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2de239e5-1cb0-4623-901e-3794a5f7f244 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.991493776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682275991468687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2de239e5-1cb0-4623-901e-3794a5f7f244 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.992008049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a402ad28-78d1-4bcf-be62-ef1dd268d935 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.992119723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a402ad28-78d1-4bcf-be62-ef1dd268d935 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 00:37:55 ha-863044 crio[3583]: time="2024-08-15 00:37:55.992687623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9242e96323c42cc5e35660dcbe3a5002d7d84faf37ddf6f152ba368e4b862709,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723682006890359382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723682003890648308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723681979899555064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5abc65d96ceafde2e73c26d5fe6548d3cd03610876fe573fe4b87e4c1eb74f,PodSandboxId:e02cfeea63eea6f82041a8e0e2a96cdea6d66e2dd5ed5f1f3d3e542ac853dcba,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723681971187232122,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a,PodSandboxId:d28dd79bb029e02e840393c18288204cc72f9141e0f75ae45034aa86e072105f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723681970325487212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 724fd3a4e6a5da4ff0fd467854a55959,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb896d8cead143c23b375f754c0df0f9b3613bf005b323b1efc46257a60549b4,PodSandboxId:8c8c0152a76d429c2c34402e923016b0065ada905d9f52bca925011b0b4629e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723681965892289607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7565569-2f8c-4393-b4f8-b8548d65f794,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befdd0eb67c53f24058fc53346cd9b481e43da723f18c3ed0b5725c9c55368cc,PodSandboxId:0e0553bced5b326cf3fb45c6a36a15065834133bc3b5d7449f8c609e70d3e159,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723681951926987770,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed71e11d0913ce366e9aa90e4e79fd10,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73,PodSandboxId:5a637d5a9638e7e9025579b62bd36a6f2f2a5d82648f27b851890b3397c6cf89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723681937991517664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrac
ePeriod: 30,},},&Container{Id:7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049,PodSandboxId:c0102ecec2d13b28ef5fdff97b9c4bd6734a9ec5afdb2d4bb1232e96130469ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681938128475052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f,PodSandboxId:4f07d87d4b08c441f15163c53c7791067be05f69c669ad3953e26274fc256eb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723681937984205907,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0,PodSandboxId:3893e58de9eec882187cae2ab509f06e9c057ef336334f6c3a84614b10a3bc3f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723681937842358865,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\"
:53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69,PodSandboxId:60903fdbb380e131b5a890580220c3fbf0fc099fc095f6ca82d54b4c00214360,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723681937745561713,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed,PodSandboxId:a021f35eb00b10aac8b23be25fa1856dd0bccd781f9bea329a0b0c4de5770beb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723681937692868128,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db12904
56ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f,PodSandboxId:b3ae4347b75ec5cc85dc3d0e9e23be5ecc417288537f900e7344d97d3c6b7629,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723681937609506021,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86b417c56f3a2467bc7657bd68236d14,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a3e7281c498f578c02d66d427ebaf7b053c1d5376c5e66a887a652022ad2986,PodSandboxId:e9555e65cebe7117a110e9f9a10fc7aefac085c21dd6201a3aa96467ed24a671,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723681438171808367,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ck6d9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5655c46c-c830-4271-882b-c6230009cf90,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787,PodSandboxId:4feecb19b205ad6e6663f95a5965cb9ff4f8bf656bb909f8365ee3ba0863f62a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299671795909,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jxpqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72e46071-4563-4c8c-a269-c32c4d0fced3,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e,PodSandboxId:1334a86739ccfbeaee8a921359d6ae52ed85900e23a4a2cdf540704f4d75bd73,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723681299673907846,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-bc2jh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77760785-a989-4c45-a8e0-e758db3a252b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d,PodSandboxId:c2b2f0c2bdc2e34bc08a1d533db4120c094d43eece4cc9e3ec69ae130433b41f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723681287926704791,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ptbpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fee332-fbc7-4b7b-818a-9ba398dce43e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a,PodSandboxId:a6a3b389836fccd88b90e85ac355000f162fccb37f4dfdfb925fe99cd4744782,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723681284364996588,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0963208c-92ef-4625-8805-1c8ad8ae7b51,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c,PodSandboxId:ba41c766be2d5d0debd859d77ae8e36b6b01fdf16b5d57e4953b6e82440fb8fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723681273657642816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79d9d36b64f0d7c9696d4bf898501f1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6,PodSandboxId:1825ea5e56cf4bc50df1d53b7a92260ca0ee5ac0d4d4886ffa75436eaf4f22e6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723681273612784551,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-863044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a9e53655db1290456ab14b86c00883,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a402ad28-78d1-4bcf-be62-ef1dd268d935 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9242e96323c42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   8c8c0152a76d4       storage-provisioner
	7481189cf3801       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   3                   d28dd79bb029e       kube-controller-manager-ha-863044
	f92aa390854b4       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   b3ae4347b75ec       kube-apiserver-ha-863044
	9e5abc65d96ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   e02cfeea63eea       busybox-7dff88458-ck6d9
	0955874b3483b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   2                   d28dd79bb029e       kube-controller-manager-ha-863044
	eb896d8cead14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   8c8c0152a76d4       storage-provisioner
	befdd0eb67c53       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   0e0553bced5b3       kube-vip-ha-863044
	7eb4acac741dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c0102ecec2d13       coredns-6f6b679f8f-jxpqd
	1d908dbe9fbec       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   5a637d5a9638e       kube-proxy-758vr
	5a78aff1a6bd8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   4f07d87d4b08c       kindnet-ptbpb
	8e036bda4ed25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   3893e58de9eec       coredns-6f6b679f8f-bc2jh
	af5b4659b9ea1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   60903fdbb380e       kube-scheduler-ha-863044
	dc0cce4b13205       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   a021f35eb00b1       etcd-ha-863044
	3177b3c6875f2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   b3ae4347b75ec       kube-apiserver-ha-863044
	4a3e7281c498f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   e9555e65cebe7       busybox-7dff88458-ck6d9
	770157c751290       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   1334a86739ccf       coredns-6f6b679f8f-bc2jh
	a6304cc907b70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   4feecb19b205a       coredns-6f6b679f8f-jxpqd
	024782bd78877       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   c2b2f0c2bdc2e       kindnet-ptbpb
	5d1d7d03658b7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   a6a3b389836fc       kube-proxy-758vr
	0624b371b469a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   ba41c766be2d5       kube-scheduler-ha-863044
	acf9154524991       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   1825ea5e56cf4       etcd-ha-863044
	
	
	==> coredns [770157c75129098e142b07f70f7bdd8d80d42e9c4c5260112e0dc3b0133a399e] <==
	[INFO] 10.244.1.2:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116945s
	[INFO] 10.244.1.2:51392 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008307s
	[INFO] 10.244.0.4:42010 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00031726s
	[INFO] 10.244.2.2:44915 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143127s
	[INFO] 10.244.2.2:37741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170015s
	[INFO] 10.244.2.2:58647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000130581s
	[INFO] 10.244.1.2:49418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247229s
	[INFO] 10.244.1.2:44042 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127451s
	[INFO] 10.244.1.2:41801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015235s
	[INFO] 10.244.1.2:51078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176731s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1544927018]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:30:18.089) (total time: 12825ms):
	Trace[1544927018]: ---"Objects listed" error:Unauthorized 12825ms (00:30:30.915)
	Trace[1544927018]: [12.825751377s] [12.825751377s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[324365653]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:30:18.419) (total time: 12495ms):
	Trace[324365653]: ---"Objects listed" error:Unauthorized 12495ms (00:30:30.915)
	Trace[324365653]: [12.495841363s] [12.495841363s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7eb4acac741dc891d1b4d79b3df6a6ad843a76de4536ee11e93532fd02f87049] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50096->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50096->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[836955325]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:32:32.598) (total time: 10484ms):
	Trace[836955325]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer 10484ms (00:32:43.083)
	Trace[836955325]: [10.484939563s] [10.484939563s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41504->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8e036bda4ed25a5465915f56b707b3d01dc5d8fb9d6660380dd74454f867eba0] <==
	Trace[1649646237]: [10.001212572s] [10.001212572s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[285531728]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 00:32:26.264) (total time: 10001ms):
	Trace[285531728]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:32:36.265)
	Trace[285531728]: [10.001183768s] [10.001183768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47528->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47528->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6304cc907b70d5e30c3218360771f6d65f0867b903d9249955b4403f980b787] <==
	[INFO] 10.244.1.2:32926 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000109486s
	[INFO] 10.244.0.4:35014 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015446s
	[INFO] 10.244.0.4:46414 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148102s
	[INFO] 10.244.2.2:51282 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016555s
	[INFO] 10.244.2.2:43091 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529953s
	[INFO] 10.244.2.2:42863 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00043817s
	[INFO] 10.244.2.2:39074 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067798s
	[INFO] 10.244.1.2:52314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192016s
	[INFO] 10.244.1.2:58476 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116995s
	[INFO] 10.244.1.2:39360 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.001839118s
	[INFO] 10.244.0.4:51814 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012471s
	[INFO] 10.244.0.4:40547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083981s
	[INFO] 10.244.2.2:34181 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015996s
	[INFO] 10.244.2.2:56520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000727856s
	[INFO] 10.244.2.2:38242 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103367s
	[INFO] 10.244.1.2:50032 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110327s
	[INFO] 10.244.0.4:55523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123577s
	[INFO] 10.244.0.4:42586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010348s
	[INFO] 10.244.0.4:36103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184736s
	[INFO] 10.244.2.2:57332 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163958s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1890&timeout=7m27s&timeoutSeconds=447&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1888&timeout=7m22s&timeoutSeconds=442&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1891&timeout=7m50s&timeoutSeconds=470&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-863044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_21_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:21:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:32:57 +0000   Thu, 15 Aug 2024 00:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    ha-863044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e33f2588c28f4daf846273c46c5ec17c
	  System UUID:                e33f2588-c28f-4daf-8462-73c46c5ec17c
	  Boot ID:                    262603d0-6087-4822-8e6c-89d7a28279b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ck6d9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-bc2jh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-jxpqd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-863044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-ptbpb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-863044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-863044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-758vr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-863044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-863044                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m56s                 kube-proxy       
	  Normal   Starting                 16m                   kube-proxy       
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)     kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)     kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)     kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                   kubelet          Node ha-863044 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m                   kubelet          Node ha-863044 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                   kubelet          Node ha-863044 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   NodeReady                16m                   kubelet          Node ha-863044 status is now: NodeReady
	  Normal   RegisteredNode           15m                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           14m                   node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Warning  ContainerGCFailed        6m37s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             6m1s (x3 over 6m50s)  kubelet          Node ha-863044 status is now: NodeNotReady
	  Normal   RegisteredNode           5m2s                  node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           4m30s                 node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	  Normal   RegisteredNode           3m18s                 node-controller  Node ha-863044 event: Registered Node ha-863044 in Controller
	
	
	Name:               ha-863044-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_22_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:22:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:37:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:33:41 +0000   Thu, 15 Aug 2024 00:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-863044-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 877b666314684accbfd657286f8d0095
	  System UUID:                877b6663-1468-4acc-bfd6-57286f8d0095
	  Boot ID:                    608ac5ca-dc01-4492-ae62-64b381450129
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-zmr7b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-863044-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-xpnzd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-863044-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-863044-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6l4gp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-863044-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-863044-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-863044-m02 status is now: NodeNotReady
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-863044-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-863044-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-863044-m02 event: Registered Node ha-863044-m02 in Controller
	
	
	Name:               ha-863044-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-863044-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=ha-863044
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_24_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:24:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-863044-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:35:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:36:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:36:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:36:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 00:35:09 +0000   Thu, 15 Aug 2024 00:36:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-863044-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29de5816079a4aa6bb73571d88da2d1b
	  System UUID:                29de5816-079a-4aa6-bb73-571d88da2d1b
	  Boot ID:                    ab22bda9-429f-4e7b-925a-d953cf540ee2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-gqvd2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-7r4h2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-72j9n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-863044-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-863044-m04 event: Registered Node ha-863044-m04 in Controller
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-863044-m04 has been rebooted, boot id: ab22bda9-429f-4e7b-925a-d953cf540ee2
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-863044-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-863044-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m47s                  kubelet          Node ha-863044-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m47s                  kubelet          Node ha-863044-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 4m22s)   node-controller  Node ha-863044-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Aug15 00:21] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.061023] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060159] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.174439] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.118153] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.259429] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +3.778855] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.212652] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.060600] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.151808] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.077604] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.187372] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.703882] kauditd_printk_skb: 23 callbacks suppressed
	[Aug15 00:22] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 00:32] systemd-fstab-generator[3502]: Ignoring "noauto" option for root device
	[  +0.145711] systemd-fstab-generator[3514]: Ignoring "noauto" option for root device
	[  +0.161289] systemd-fstab-generator[3528]: Ignoring "noauto" option for root device
	[  +0.140986] systemd-fstab-generator[3540]: Ignoring "noauto" option for root device
	[  +0.258789] systemd-fstab-generator[3568]: Ignoring "noauto" option for root device
	[  +5.731086] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.087193] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.541912] kauditd_printk_skb: 12 callbacks suppressed
	[ +14.577284] kauditd_printk_skb: 86 callbacks suppressed
	[ +19.235744] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 00:33] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [acf9154524991d8a1e11acd3e502f3d84b878e711ad248ea36cbdd325252ece6] <==
	{"level":"warn","ts":"2024-08-15T00:30:32.946689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:30:31.938008Z","time spent":"1.008671699s","remote":"127.0.0.1:58916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	2024/08/15 00:30:32 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T00:30:32.985380Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:30:32.985420Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T00:30:32.985553Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6f26d2d338759d80","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T00:30:32.985678Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985707Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985774Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985885Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985968Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.985999Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f7f22545c69cf70a"}
	{"level":"info","ts":"2024-08-15T00:30:32.986007Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986018Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986125Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986293Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986338Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986419Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.986482Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:30:32.989601Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"warn","ts":"2024-08-15T00:30:32.989699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.090958258s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T00:30:32.989735Z","caller":"traceutil/trace.go:171","msg":"trace[573312896] range","detail":"{range_begin:; range_end:; }","duration":"9.091008675s","start":"2024-08-15T00:30:23.898718Z","end":"2024-08-15T00:30:32.989727Z","steps":["trace[573312896] 'agreement among raft nodes before linearized reading'  (duration: 9.090957342s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T00:30:32.989790Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T00:30:32.990495Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-08-15T00:30:32.990527Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-863044","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> etcd [dc0cce4b13205c74f2414baaf67c2da86c94f4e00b516df95cf6c2777cdccfed] <==
	{"level":"info","ts":"2024-08-15T00:34:31.623726Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.649773Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"fd5a13d6251910c6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T00:34:31.649894Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:34:31.654973Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6f26d2d338759d80","to":"fd5a13d6251910c6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T00:34:31.655098Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:34:33.571608Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:34:33.571738Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"fd5a13d6251910c6","rtt":"0s","error":"dial tcp 192.168.39.30:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T00:35:23.001673Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.30:43668","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-15T00:35:23.013983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 switched to configuration voters=(8009320791952170368 17866383653347325706)"}
	{"level":"info","ts":"2024-08-15T00:35:23.015956Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","removed-remote-peer-id":"fd5a13d6251910c6","removed-remote-peer-urls":["https://192.168.39.30:2380"]}
	{"level":"info","ts":"2024-08-15T00:35:23.016118Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.016376Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:35:23.016440Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.016969Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:35:23.017222Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:35:23.017387Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.017655Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6","error":"context canceled"}
	{"level":"warn","ts":"2024-08-15T00:35:23.017767Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"fd5a13d6251910c6","error":"failed to read fd5a13d6251910c6 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-15T00:35:23.017842Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.018013Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T00:35:23.018182Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f26d2d338759d80","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:35:23.018227Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"fd5a13d6251910c6"}
	{"level":"info","ts":"2024-08-15T00:35:23.018267Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6f26d2d338759d80","removed-remote-peer-id":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.027939Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6f26d2d338759d80","remote-peer-id-stream-handler":"6f26d2d338759d80","remote-peer-id-from":"fd5a13d6251910c6"}
	{"level":"warn","ts":"2024-08-15T00:35:23.031185Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6f26d2d338759d80","remote-peer-id-stream-handler":"6f26d2d338759d80","remote-peer-id-from":"fd5a13d6251910c6"}
	
	
	==> kernel <==
	 00:37:56 up 17 min,  0 users,  load average: 0.23, 0.35, 0.24
	Linux ha-863044 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [024782bd788774de9ace74de1522ee9a8c3f199e3430fe65581bd9df3ad3aa5d] <==
	I0815 00:30:08.919661       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:08.919802       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:30:08.920000       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:08.920099       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:08.920226       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:08.920269       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:30:08.920372       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:08.920413       1 main.go:299] handling current node
	I0815 00:30:18.924312       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:18.924407       1 main.go:299] handling current node
	I0815 00:30:18.924439       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:18.924470       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:30:18.924635       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:18.924657       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:18.924726       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:18.924745       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	E0815 00:30:21.195607       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1883&timeout=6m1s&timeoutSeconds=361&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0815 00:30:28.920419       1 main.go:295] Handling node with IPs: map[192.168.39.30:{}]
	I0815 00:30:28.920467       1 main.go:322] Node ha-863044-m03 has CIDR [10.244.2.0/24] 
	I0815 00:30:28.920642       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:30:28.920663       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:30:28.920735       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:30:28.920751       1 main.go:299] handling current node
	I0815 00:30:28.920783       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:30:28.920797       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5a78aff1a6bd80d12b09da54ca90018fb8d7a3d1dc39978646568195d876a17f] <==
	I0815 00:37:08.957558       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:37:18.955809       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:37:18.955943       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:37:18.956313       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:37:18.956380       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:37:18.956508       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:37:18.956548       1 main.go:299] handling current node
	I0815 00:37:28.963132       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:37:28.963250       1 main.go:299] handling current node
	I0815 00:37:28.963283       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:37:28.963301       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:37:28.963457       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:37:28.963495       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:37:38.955716       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:37:38.955850       1 main.go:299] handling current node
	I0815 00:37:38.955884       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:37:38.955902       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:37:38.956130       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:37:38.956161       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	I0815 00:37:48.954505       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0815 00:37:48.954658       1 main.go:299] handling current node
	I0815 00:37:48.954694       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0815 00:37:48.954713       1 main.go:322] Node ha-863044-m02 has CIDR [10.244.1.0/24] 
	I0815 00:37:48.954866       1 main.go:295] Handling node with IPs: map[192.168.39.247:{}]
	I0815 00:37:48.954886       1 main.go:322] Node ha-863044-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3177b3c6875f29527f81c74a5d3bc9b56b139cf1917c0375badeed94ad13304f] <==
	I0815 00:32:18.156942       1 options.go:228] external host was not specified, using 192.168.39.6
	I0815 00:32:18.175563       1 server.go:142] Version: v1.31.0
	I0815 00:32:18.175609       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:32:19.035925       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 00:32:19.046900       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:32:19.051664       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 00:32:19.051694       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 00:32:19.051934       1 instance.go:232] Using reconciler: lease
	W0815 00:32:39.033585       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 00:32:39.033834       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 00:32:39.052887       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 00:32:39.052992       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f92aa390854b4fe628e75613f1124beebe9adb2ded49dc3bc7b7f04ab6ad5cff] <==
	I0815 00:33:01.568813       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0815 00:33:01.568883       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0815 00:33:01.643246       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 00:33:01.644220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 00:33:01.646283       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 00:33:01.646323       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 00:33:01.646512       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 00:33:01.646562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 00:33:01.646568       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 00:33:01.644349       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 00:33:01.644361       1 aggregator.go:171] initial CRD sync complete...
	I0815 00:33:01.647821       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 00:33:01.647829       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 00:33:01.647833       1 cache.go:39] Caches are synced for autoregister controller
	I0815 00:33:01.653550       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0815 00:33:01.655203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.30]
	I0815 00:33:01.666670       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 00:33:01.677433       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:33:01.677470       1 policy_source.go:224] refreshing policies
	I0815 00:33:01.727950       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 00:33:01.758341       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:33:01.767726       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 00:33:01.771350       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 00:33:02.546158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 00:33:02.884551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.30 192.168.39.6]
	
	
	==> kube-controller-manager [0955874b3483b218b53b75431581f070ae0a22230f550a7d8b78775608b5558a] <==
	I0815 00:32:51.077774       1 serving.go:386] Generated self-signed cert in-memory
	I0815 00:32:51.714851       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 00:32:51.714972       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:32:51.717274       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 00:32:51.717410       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 00:32:51.717876       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 00:32:51.717983       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 00:33:01.724616       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [7481189cf3801cc3c33a3eb3a11315b91f505f5119b9fded6d4fb163acec80fe] <==
	I0815 00:35:19.934668       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.368µs"
	I0815 00:35:19.940812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.577µs"
	I0815 00:35:21.805519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.548µs"
	I0815 00:35:22.165862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.431µs"
	I0815 00:35:22.170581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.616µs"
	I0815 00:35:23.788422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.19909ms"
	I0815 00:35:23.788935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.596µs"
	I0815 00:35:33.891651       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-863044-m04"
	I0815 00:35:33.891945       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m03"
	E0815 00:35:46.437168       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:35:46.437237       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:35:46.437246       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:35:46.437252       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:35:46.437257       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:36:06.437743       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:36:06.437817       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:36:06.437826       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:36:06.437831       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	E0815 00:36:06.437836       1 gc_controller.go:151] "Failed to get node" err="node \"ha-863044-m03\" not found" logger="pod-garbage-collector-controller" node="ha-863044-m03"
	I0815 00:36:09.716470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:36:09.741688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:36:09.787758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.231735ms"
	I0815 00:36:09.788792       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.745µs"
	I0815 00:36:11.470089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	I0815 00:36:14.811657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-863044-m04"
	
	
	==> kube-proxy [1d908dbe9fbecf3439554cdfd533fbd8edc65fd0fc302dafafd14e7584f88a73] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:32:19.531764       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:22.603932       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:25.675509       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:31.819512       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 00:32:41.035666       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-863044\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 00:33:00.249005       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0815 00:33:00.249208       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:33:00.360674       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:33:00.360764       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:33:00.360816       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:33:00.363127       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:33:00.363450       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:33:00.363636       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:33:00.365406       1 config.go:197] "Starting service config controller"
	I0815 00:33:00.365479       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:33:00.365532       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:33:00.365560       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:33:00.366339       1 config.go:326] "Starting node config controller"
	I0815 00:33:00.366420       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:33:00.466119       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:33:00.466259       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:33:00.468412       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5d1d7d03658b79defd00fbf68ae078b4c14b7c50cc336523e9e737a585e2740a] <==
	E0815 00:29:18.219423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:18.219373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:18.219535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:21.355526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:21.355648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:24.427378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:24.427491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:27.500014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:27.500266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:27.500415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:27.500475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:36.717022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:36.717559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:39.790180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:39.790300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:39.790504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:39.790605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:29:58.220892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:29:58.220973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:01.292407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:01.292667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1869\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:04.364299       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:04.364581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-863044&resourceVersion=1793\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 00:30:32.011518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 00:30:32.011647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1874\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [0624b371b469a01573685dff402109d96211dc7127c1cf3c5c0a4e1d5356040c] <==
	I0815 00:24:34.809950       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hhvjh" node="ha-863044-m04"
	E0815 00:24:34.844902       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:24:34.845683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5ac2ee81-5268-49b4-80fc-2b9950b30cad(kube-system/kube-proxy-5ptdm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ptdm"
	E0815 00:24:34.845833       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ptdm\": pod kube-proxy-5ptdm is already assigned to node \"ha-863044-m04\"" pod="kube-system/kube-proxy-5ptdm"
	I0815 00:24:34.845899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ptdm" node="ha-863044-m04"
	E0815 00:30:09.525295       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0815 00:30:09.525552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0815 00:30:09.525659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0815 00:30:19.575178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 00:30:19.839730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 00:30:20.899645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:22.154277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	W0815 00:30:23.073427       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:30:23.073520       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0815 00:30:24.085015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 00:30:25.033302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:25.557857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	W0815 00:30:27.064441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 00:30:27.064490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0815 00:30:27.532107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 00:30:28.624966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0815 00:30:30.306291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0815 00:30:31.077778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 00:30:31.404176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0815 00:30:32.921230       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [af5b4659b9ea138e22072962382d618ca8b5f50e46861131601f65a468f1ec69] <==
	W0815 00:32:53.505975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:53.506129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.6:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:54.338562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:54.338725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:54.729817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.6:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:54.729947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.6:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:55.018602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.6:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:55.018676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.6:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:55.498187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:55.498326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.275850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.6:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.275894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.6:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.519491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.6:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.519607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.6:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:57.929017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.6:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:57.929657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.6:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.667574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.667694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.6:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.805689       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.6:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.805748       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.6:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:58.979816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.6:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:58.979931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.6:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	W0815 00:32:59.042756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.6:8443: connect: connection refused
	E0815 00:32:59.042878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.6:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.6:8443: connect: connection refused" logger="UnhandledError"
	I0815 00:33:15.768976       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:36:20 ha-863044 kubelet[1326]: E0815 00:36:20.133458    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682180133113387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:20 ha-863044 kubelet[1326]: E0815 00:36:20.133534    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682180133113387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:30 ha-863044 kubelet[1326]: E0815 00:36:30.135960    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682190135339653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:30 ha-863044 kubelet[1326]: E0815 00:36:30.136293    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682190135339653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:40 ha-863044 kubelet[1326]: E0815 00:36:40.138522    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682200137995872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:40 ha-863044 kubelet[1326]: E0815 00:36:40.138703    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682200137995872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:50 ha-863044 kubelet[1326]: E0815 00:36:50.140401    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682210139965316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:36:50 ha-863044 kubelet[1326]: E0815 00:36:50.140689    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682210139965316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:00 ha-863044 kubelet[1326]: E0815 00:37:00.142894    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682220142568066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:00 ha-863044 kubelet[1326]: E0815 00:37:00.143371    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682220142568066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:10 ha-863044 kubelet[1326]: E0815 00:37:10.145312    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682230144798001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:10 ha-863044 kubelet[1326]: E0815 00:37:10.145362    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682230144798001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:19 ha-863044 kubelet[1326]: E0815 00:37:19.907371    1326 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:37:19 ha-863044 kubelet[1326]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:37:19 ha-863044 kubelet[1326]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:37:19 ha-863044 kubelet[1326]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:37:19 ha-863044 kubelet[1326]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:37:20 ha-863044 kubelet[1326]: E0815 00:37:20.147288    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682240146923598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:20 ha-863044 kubelet[1326]: E0815 00:37:20.147311    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682240146923598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:30 ha-863044 kubelet[1326]: E0815 00:37:30.149397    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682250148925326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:30 ha-863044 kubelet[1326]: E0815 00:37:30.149496    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682250148925326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:40 ha-863044 kubelet[1326]: E0815 00:37:40.152293    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682260151642580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:40 ha-863044 kubelet[1326]: E0815 00:37:40.152721    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682260151642580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:50 ha-863044 kubelet[1326]: E0815 00:37:50.155581    1326 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682270155016398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:37:50 ha-863044 kubelet[1326]: E0815 00:37:50.155662    1326 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723682270155016398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 00:37:55.580057   39353 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
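Note on the stderr above: "bufio.Scanner: token too long" is the error a Go bufio.Scanner reports (bufio.ErrTooLong) when a single line exceeds its buffer cap, which defaults to bufio.MaxScanTokenSize (64 KiB); lastStart.txt evidently contains such a line. Below is a minimal, illustrative Go sketch of that failure mode and of raising the cap with Scanner.Buffer. It is not minikube's logs code; the file name and sizes are placeholders.

	// Sketch only: shows why a long line trips bufio.ErrTooLong and how the
	// limit can be raised. File name and buffer sizes are hypothetical.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func readLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 * 1024 bytes). A longer
		// line makes Scan() return false and Err() return bufio.ErrTooLong
		// ("bufio.Scanner: token too long"). Raising the cap avoids that.
		sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // allow lines up to 16 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		return sc.Err() // nil once the cap exceeds the longest line
	}

	func main() {
		if err := readLines("lastStart.txt"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}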
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-863044 -n ha-863044
helpers_test.go:261: (dbg) Run:  kubectl --context ha-863044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.49s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (324.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-978269
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-978269
E0815 00:54:41.522805   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-978269: exit status 82 (2m1.757481571s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-978269-m03"  ...
	* Stopping node "multinode-978269-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-978269" : exit status 82
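For context on the "exit status 82" above: a minimal, illustrative Go sketch of how a caller can run such a CLI command and read a non-zero exit status via os/exec. This is not the actual test harness code; the binary path and arguments are copied from the log above purely as placeholders.

	// Sketch only: capture combined output and the exit code of a CLI run.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-978269")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// For the GUEST_STOP_TIMEOUT case above this would print 82.
			fmt.Printf("non-zero exit: %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Printf("failed to run command: %v\n", err)
		}
	}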
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-978269 --wait=true -v=8 --alsologtostderr
E0815 00:57:44.592515   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:58:45.640937   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:59:41.522579   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-978269 --wait=true -v=8 --alsologtostderr: (3m20.64114796s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-978269
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-978269 -n multinode-978269
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-978269 logs -n 25: (1.395184659s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269:/home/docker/cp-test_multinode-978269-m02_multinode-978269.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269 sudo cat                                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m02_multinode-978269.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03:/home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269-m03 sudo cat                                   | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp testdata/cp-test.txt                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269:/home/docker/cp-test_multinode-978269-m03_multinode-978269.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269 sudo cat                                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m03_multinode-978269.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02:/home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269-m02 sudo cat                                   | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-978269 node stop m03                                                          | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	| node    | multinode-978269 node start                                                             | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC | 15 Aug 24 00:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-978269                                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC |                     |
	| stop    | -p multinode-978269                                                                     | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC |                     |
	| start   | -p multinode-978269                                                                     | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:56 UTC | 15 Aug 24 01:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-978269                                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 01:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:56:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:56:41.101611   49465 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:56:41.101727   49465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:41.101734   49465 out.go:304] Setting ErrFile to fd 2...
	I0815 00:56:41.101741   49465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:41.101911   49465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:56:41.102440   49465 out.go:298] Setting JSON to false
	I0815 00:56:41.103373   49465 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5946,"bootTime":1723677455,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:56:41.103427   49465 start.go:139] virtualization: kvm guest
	I0815 00:56:41.105597   49465 out.go:177] * [multinode-978269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:56:41.106932   49465 notify.go:220] Checking for updates...
	I0815 00:56:41.106962   49465 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:56:41.108281   49465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:56:41.109617   49465 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:56:41.110844   49465 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:56:41.111997   49465 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:56:41.113349   49465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:56:41.114753   49465 config.go:182] Loaded profile config "multinode-978269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:56:41.114849   49465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:56:41.115300   49465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:56:41.115372   49465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:56:41.131065   49465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0815 00:56:41.131469   49465 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:56:41.132040   49465 main.go:141] libmachine: Using API Version  1
	I0815 00:56:41.132069   49465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:56:41.132503   49465 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:56:41.132705   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.168608   49465 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 00:56:41.170000   49465 start.go:297] selected driver: kvm2
	I0815 00:56:41.170024   49465 start.go:901] validating driver "kvm2" against &{Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:56:41.170163   49465 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:56:41.170565   49465 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:56:41.170670   49465 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:56:41.185662   49465 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:56:41.186346   49465 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:56:41.186415   49465 cni.go:84] Creating CNI manager for ""
	I0815 00:56:41.186427   49465 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 00:56:41.186496   49465 start.go:340] cluster config:
	{Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:56:41.186636   49465 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:56:41.188412   49465 out.go:177] * Starting "multinode-978269" primary control-plane node in "multinode-978269" cluster
	I0815 00:56:41.189743   49465 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:56:41.189788   49465 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:56:41.189799   49465 cache.go:56] Caching tarball of preloaded images
	I0815 00:56:41.189882   49465 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:56:41.189894   49465 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:56:41.190041   49465 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/config.json ...
	I0815 00:56:41.190275   49465 start.go:360] acquireMachinesLock for multinode-978269: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:56:41.190352   49465 start.go:364] duration metric: took 32.543µs to acquireMachinesLock for "multinode-978269"
	I0815 00:56:41.190369   49465 start.go:96] Skipping create...Using existing machine configuration
	I0815 00:56:41.190380   49465 fix.go:54] fixHost starting: 
	I0815 00:56:41.190650   49465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:56:41.190687   49465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:56:41.205366   49465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0815 00:56:41.205835   49465 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:56:41.206291   49465 main.go:141] libmachine: Using API Version  1
	I0815 00:56:41.206334   49465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:56:41.206669   49465 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:56:41.206861   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.206999   49465 main.go:141] libmachine: (multinode-978269) Calling .GetState
	I0815 00:56:41.208931   49465 fix.go:112] recreateIfNeeded on multinode-978269: state=Running err=<nil>
	W0815 00:56:41.208968   49465 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 00:56:41.212032   49465 out.go:177] * Updating the running kvm2 "multinode-978269" VM ...
	I0815 00:56:41.213321   49465 machine.go:94] provisionDockerMachine start ...
	I0815 00:56:41.213372   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.213579   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.216261   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.216816   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.216846   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.217046   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.217227   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.217380   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.217496   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.217643   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.217860   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.217873   49465 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:56:41.325965   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-978269
	
	I0815 00:56:41.326004   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.326311   49465 buildroot.go:166] provisioning hostname "multinode-978269"
	I0815 00:56:41.326352   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.326529   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.329619   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.329962   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.329986   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.330176   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.330341   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.330538   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.330745   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.330947   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.331134   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.331149   49465 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-978269 && echo "multinode-978269" | sudo tee /etc/hostname
	I0815 00:56:41.446894   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-978269
	
	I0815 00:56:41.446921   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.449797   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.450235   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.450264   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.450475   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.450664   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.450796   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.451025   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.451178   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.451357   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.451373   49465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-978269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-978269/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-978269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:56:41.557435   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:56:41.557463   49465 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:56:41.557493   49465 buildroot.go:174] setting up certificates
	I0815 00:56:41.557502   49465 provision.go:84] configureAuth start
	I0815 00:56:41.557513   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.557800   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:56:41.560511   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.560914   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.560949   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.561086   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.563056   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.563381   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.563420   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.563529   49465 provision.go:143] copyHostCerts
	I0815 00:56:41.563582   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:56:41.563614   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:56:41.563632   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:56:41.563707   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:56:41.563816   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:56:41.563839   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:56:41.563844   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:56:41.563871   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:56:41.563933   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:56:41.563953   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:56:41.563958   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:56:41.563981   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:56:41.564046   49465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.multinode-978269 san=[127.0.0.1 192.168.39.9 localhost minikube multinode-978269]
	I0815 00:56:41.742696   49465 provision.go:177] copyRemoteCerts
	I0815 00:56:41.742761   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:56:41.742782   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.746032   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.746464   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.746491   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.746713   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.746909   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.747106   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.747246   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:56:41.830850   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:56:41.830920   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 00:56:41.860573   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:56:41.860676   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:56:41.885044   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:56:41.885113   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:56:41.907449   49465 provision.go:87] duration metric: took 349.933653ms to configureAuth
	I0815 00:56:41.907477   49465 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:56:41.907722   49465 config.go:182] Loaded profile config "multinode-978269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:56:41.907798   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.910554   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.910948   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.910976   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.911093   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.911277   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.911431   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.911600   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.911760   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.911935   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.911948   49465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:58:12.642699   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:58:12.642747   49465 machine.go:97] duration metric: took 1m31.429410222s to provisionDockerMachine
	I0815 00:58:12.642765   49465 start.go:293] postStartSetup for "multinode-978269" (driver="kvm2")
	I0815 00:58:12.642788   49465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:58:12.642812   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.643184   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:58:12.643209   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.646807   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.647400   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.647442   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.647516   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.647715   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.647855   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.647993   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.731510   49465 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:58:12.735601   49465 command_runner.go:130] > NAME=Buildroot
	I0815 00:58:12.735624   49465 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 00:58:12.735629   49465 command_runner.go:130] > ID=buildroot
	I0815 00:58:12.735634   49465 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 00:58:12.735641   49465 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 00:58:12.735678   49465 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:58:12.735697   49465 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:58:12.735776   49465 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:58:12.735874   49465 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:58:12.735888   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:58:12.735987   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:58:12.745028   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:58:12.767190   49465 start.go:296] duration metric: took 124.411832ms for postStartSetup
	I0815 00:58:12.767265   49465 fix.go:56] duration metric: took 1m31.576887329s for fixHost
	I0815 00:58:12.767292   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.769869   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.770254   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.770284   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.770452   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.770653   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.770816   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.770957   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.771122   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:58:12.771301   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:58:12.771312   49465 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:58:12.873111   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723683492.846810194
	
	I0815 00:58:12.873135   49465 fix.go:216] guest clock: 1723683492.846810194
	I0815 00:58:12.873158   49465 fix.go:229] Guest: 2024-08-15 00:58:12.846810194 +0000 UTC Remote: 2024-08-15 00:58:12.767274555 +0000 UTC m=+91.700030506 (delta=79.535639ms)
	I0815 00:58:12.873198   49465 fix.go:200] guest clock delta is within tolerance: 79.535639ms
	I0815 00:58:12.873206   49465 start.go:83] releasing machines lock for "multinode-978269", held for 1m31.682841428s
	I0815 00:58:12.873234   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.873474   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:58:12.876502   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.876844   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.876868   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.877048   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877520   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877725   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877819   49465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:58:12.877854   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.877969   49465 ssh_runner.go:195] Run: cat /version.json
	I0815 00:58:12.877993   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.880443   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.880621   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.880859   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.880885   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.881053   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.881074   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.881095   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.881196   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.881279   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.881338   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.881566   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.881601   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.881746   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.881876   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.957062   49465 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0815 00:58:12.957382   49465 ssh_runner.go:195] Run: systemctl --version
	I0815 00:58:12.994916   49465 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 00:58:12.994964   49465 command_runner.go:130] > systemd 252 (252)
	I0815 00:58:12.994984   49465 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 00:58:12.995062   49465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:58:13.148312   49465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:58:13.156190   49465 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 00:58:13.156254   49465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:58:13.156334   49465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:58:13.165179   49465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 00:58:13.165202   49465 start.go:495] detecting cgroup driver to use...
	I0815 00:58:13.165275   49465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:58:13.181568   49465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:58:13.195299   49465 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:58:13.195355   49465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:58:13.208919   49465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:58:13.222255   49465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:58:13.358438   49465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:58:13.498305   49465 docker.go:233] disabling docker service ...
	I0815 00:58:13.498385   49465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:58:13.513976   49465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:58:13.526116   49465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:58:13.662582   49465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:58:13.808791   49465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:58:13.822004   49465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:58:13.841519   49465 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0815 00:58:13.841564   49465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:58:13.841609   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.851437   49465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:58:13.851509   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.861425   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.871028   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.880784   49465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:58:13.912878   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.928498   49465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.966878   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.984521   49465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:58:14.000622   49465 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 00:58:14.000747   49465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:58:14.011860   49465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:58:14.205579   49465 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:58:14.476619   49465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:58:14.476711   49465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:58:14.481349   49465 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 00:58:14.481375   49465 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 00:58:14.481383   49465 command_runner.go:130] > Device: 0,22	Inode: 1417        Links: 1
	I0815 00:58:14.481394   49465 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 00:58:14.481402   49465 command_runner.go:130] > Access: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481412   49465 command_runner.go:130] > Modify: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481422   49465 command_runner.go:130] > Change: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481430   49465 command_runner.go:130] >  Birth: -
	I0815 00:58:14.481447   49465 start.go:563] Will wait 60s for crictl version
	I0815 00:58:14.481491   49465 ssh_runner.go:195] Run: which crictl
	I0815 00:58:14.484938   49465 command_runner.go:130] > /usr/bin/crictl
	I0815 00:58:14.484997   49465 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:58:14.520674   49465 command_runner.go:130] > Version:  0.1.0
	I0815 00:58:14.520698   49465 command_runner.go:130] > RuntimeName:  cri-o
	I0815 00:58:14.520704   49465 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 00:58:14.520781   49465 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 00:58:14.522010   49465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:58:14.522085   49465 ssh_runner.go:195] Run: crio --version
	I0815 00:58:14.548957   49465 command_runner.go:130] > crio version 1.29.1
	I0815 00:58:14.548987   49465 command_runner.go:130] > Version:        1.29.1
	I0815 00:58:14.548995   49465 command_runner.go:130] > GitCommit:      unknown
	I0815 00:58:14.549002   49465 command_runner.go:130] > GitCommitDate:  unknown
	I0815 00:58:14.549007   49465 command_runner.go:130] > GitTreeState:   clean
	I0815 00:58:14.549013   49465 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 00:58:14.549017   49465 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 00:58:14.549021   49465 command_runner.go:130] > Compiler:       gc
	I0815 00:58:14.549025   49465 command_runner.go:130] > Platform:       linux/amd64
	I0815 00:58:14.549029   49465 command_runner.go:130] > Linkmode:       dynamic
	I0815 00:58:14.549038   49465 command_runner.go:130] > BuildTags:      
	I0815 00:58:14.549050   49465 command_runner.go:130] >   containers_image_ostree_stub
	I0815 00:58:14.549055   49465 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 00:58:14.549059   49465 command_runner.go:130] >   btrfs_noversion
	I0815 00:58:14.549066   49465 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 00:58:14.549075   49465 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 00:58:14.549084   49465 command_runner.go:130] >   seccomp
	I0815 00:58:14.549091   49465 command_runner.go:130] > LDFlags:          unknown
	I0815 00:58:14.549097   49465 command_runner.go:130] > SeccompEnabled:   true
	I0815 00:58:14.549104   49465 command_runner.go:130] > AppArmorEnabled:  false
	I0815 00:58:14.549183   49465 ssh_runner.go:195] Run: crio --version
	I0815 00:58:14.575442   49465 command_runner.go:130] > crio version 1.29.1
	I0815 00:58:14.575464   49465 command_runner.go:130] > Version:        1.29.1
	I0815 00:58:14.575470   49465 command_runner.go:130] > GitCommit:      unknown
	I0815 00:58:14.575474   49465 command_runner.go:130] > GitCommitDate:  unknown
	I0815 00:58:14.575478   49465 command_runner.go:130] > GitTreeState:   clean
	I0815 00:58:14.575484   49465 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 00:58:14.575495   49465 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 00:58:14.575499   49465 command_runner.go:130] > Compiler:       gc
	I0815 00:58:14.575505   49465 command_runner.go:130] > Platform:       linux/amd64
	I0815 00:58:14.575511   49465 command_runner.go:130] > Linkmode:       dynamic
	I0815 00:58:14.575519   49465 command_runner.go:130] > BuildTags:      
	I0815 00:58:14.575530   49465 command_runner.go:130] >   containers_image_ostree_stub
	I0815 00:58:14.575538   49465 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 00:58:14.575544   49465 command_runner.go:130] >   btrfs_noversion
	I0815 00:58:14.575551   49465 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 00:58:14.575561   49465 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 00:58:14.575567   49465 command_runner.go:130] >   seccomp
	I0815 00:58:14.575576   49465 command_runner.go:130] > LDFlags:          unknown
	I0815 00:58:14.575582   49465 command_runner.go:130] > SeccompEnabled:   true
	I0815 00:58:14.575591   49465 command_runner.go:130] > AppArmorEnabled:  false
	I0815 00:58:14.577666   49465 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:58:14.578896   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:58:14.581441   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:14.581745   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:14.581767   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:14.581961   49465 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:58:14.585872   49465 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0815 00:58:14.585971   49465 kubeadm.go:883] updating cluster {Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 00:58:14.586117   49465 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:58:14.586177   49465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:58:14.627352   49465 command_runner.go:130] > {
	I0815 00:58:14.627369   49465 command_runner.go:130] >   "images": [
	I0815 00:58:14.627373   49465 command_runner.go:130] >     {
	I0815 00:58:14.627380   49465 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 00:58:14.627385   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627391   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 00:58:14.627395   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627400   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627412   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 00:58:14.627425   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 00:58:14.627431   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627447   49465 command_runner.go:130] >       "size": "87165492",
	I0815 00:58:14.627454   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627458   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627463   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627467   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627473   49465 command_runner.go:130] >     },
	I0815 00:58:14.627477   49465 command_runner.go:130] >     {
	I0815 00:58:14.627483   49465 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 00:58:14.627492   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627506   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 00:58:14.627515   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627522   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627536   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 00:58:14.627551   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 00:58:14.627557   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627561   49465 command_runner.go:130] >       "size": "87190579",
	I0815 00:58:14.627567   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627578   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627587   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627597   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627605   49465 command_runner.go:130] >     },
	I0815 00:58:14.627611   49465 command_runner.go:130] >     {
	I0815 00:58:14.627623   49465 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 00:58:14.627633   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627643   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 00:58:14.627651   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627655   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627669   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 00:58:14.627684   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 00:58:14.627692   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627700   49465 command_runner.go:130] >       "size": "1363676",
	I0815 00:58:14.627708   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627715   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627725   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627733   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627739   49465 command_runner.go:130] >     },
	I0815 00:58:14.627749   49465 command_runner.go:130] >     {
	I0815 00:58:14.627762   49465 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 00:58:14.627772   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627782   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 00:58:14.627788   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627797   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627811   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 00:58:14.627830   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 00:58:14.627838   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627848   49465 command_runner.go:130] >       "size": "31470524",
	I0815 00:58:14.627858   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627867   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627873   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627882   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627890   49465 command_runner.go:130] >     },
	I0815 00:58:14.627898   49465 command_runner.go:130] >     {
	I0815 00:58:14.627907   49465 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 00:58:14.627913   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627921   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 00:58:14.627929   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627939   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627953   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 00:58:14.627967   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 00:58:14.627976   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627985   49465 command_runner.go:130] >       "size": "61245718",
	I0815 00:58:14.627993   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.628000   49465 command_runner.go:130] >       "username": "nonroot",
	I0815 00:58:14.628006   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628015   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628024   49465 command_runner.go:130] >     },
	I0815 00:58:14.628032   49465 command_runner.go:130] >     {
	I0815 00:58:14.628042   49465 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 00:58:14.628051   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628060   49465 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 00:58:14.628069   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628077   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628093   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 00:58:14.628103   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 00:58:14.628109   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628115   49465 command_runner.go:130] >       "size": "149009664",
	I0815 00:58:14.628121   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628128   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628132   49465 command_runner.go:130] >       },
	I0815 00:58:14.628138   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628145   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628152   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628158   49465 command_runner.go:130] >     },
	I0815 00:58:14.628165   49465 command_runner.go:130] >     {
	I0815 00:58:14.628171   49465 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 00:58:14.628179   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628190   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 00:58:14.628199   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628209   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628223   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 00:58:14.628237   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 00:58:14.628245   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628253   49465 command_runner.go:130] >       "size": "95233506",
	I0815 00:58:14.628256   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628268   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628276   49465 command_runner.go:130] >       },
	I0815 00:58:14.628286   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628295   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628304   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628312   49465 command_runner.go:130] >     },
	I0815 00:58:14.628318   49465 command_runner.go:130] >     {
	I0815 00:58:14.628331   49465 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 00:58:14.628339   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628344   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 00:58:14.628352   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628362   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628393   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 00:58:14.628409   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 00:58:14.628419   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628427   49465 command_runner.go:130] >       "size": "89437512",
	I0815 00:58:14.628431   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628440   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628448   49465 command_runner.go:130] >       },
	I0815 00:58:14.628457   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628466   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628473   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628478   49465 command_runner.go:130] >     },
	I0815 00:58:14.628484   49465 command_runner.go:130] >     {
	I0815 00:58:14.628493   49465 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 00:58:14.628499   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628507   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 00:58:14.628511   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628514   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628524   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 00:58:14.628536   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 00:58:14.628542   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628548   49465 command_runner.go:130] >       "size": "92728217",
	I0815 00:58:14.628555   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.628564   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628570   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628578   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628585   49465 command_runner.go:130] >     },
	I0815 00:58:14.628591   49465 command_runner.go:130] >     {
	I0815 00:58:14.628600   49465 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 00:58:14.628608   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628619   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 00:58:14.628627   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628634   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628646   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 00:58:14.628676   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 00:58:14.628685   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628691   49465 command_runner.go:130] >       "size": "68420936",
	I0815 00:58:14.628700   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628710   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628724   49465 command_runner.go:130] >       },
	I0815 00:58:14.628981   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.629023   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.629030   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.629036   49465 command_runner.go:130] >     },
	I0815 00:58:14.629043   49465 command_runner.go:130] >     {
	I0815 00:58:14.629063   49465 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 00:58:14.629070   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.629077   49465 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 00:58:14.629104   49465 command_runner.go:130] >       ],
	I0815 00:58:14.629110   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.629134   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 00:58:14.629145   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 00:58:14.629151   49465 command_runner.go:130] >       ],
	I0815 00:58:14.629158   49465 command_runner.go:130] >       "size": "742080",
	I0815 00:58:14.629169   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.629176   49465 command_runner.go:130] >         "value": "65535"
	I0815 00:58:14.629181   49465 command_runner.go:130] >       },
	I0815 00:58:14.629187   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.629193   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.629199   49465 command_runner.go:130] >       "pinned": true
	I0815 00:58:14.629209   49465 command_runner.go:130] >     }
	I0815 00:58:14.629214   49465 command_runner.go:130] >   ]
	I0815 00:58:14.629219   49465 command_runner.go:130] > }
	I0815 00:58:14.629499   49465 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:58:14.629511   49465 crio.go:433] Images already preloaded, skipping extraction
	I0815 00:58:14.629577   49465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:58:14.663119   49465 command_runner.go:130] > {
	I0815 00:58:14.663141   49465 command_runner.go:130] >   "images": [
	I0815 00:58:14.663145   49465 command_runner.go:130] >     {
	I0815 00:58:14.663154   49465 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 00:58:14.663160   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663167   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 00:58:14.663171   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663174   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663183   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 00:58:14.663204   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 00:58:14.663213   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663219   49465 command_runner.go:130] >       "size": "87165492",
	I0815 00:58:14.663229   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663235   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663246   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663255   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663260   49465 command_runner.go:130] >     },
	I0815 00:58:14.663267   49465 command_runner.go:130] >     {
	I0815 00:58:14.663277   49465 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 00:58:14.663285   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663294   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 00:58:14.663299   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663305   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663316   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 00:58:14.663338   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 00:58:14.663345   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663349   49465 command_runner.go:130] >       "size": "87190579",
	I0815 00:58:14.663353   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663363   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663367   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663372   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663380   49465 command_runner.go:130] >     },
	I0815 00:58:14.663386   49465 command_runner.go:130] >     {
	I0815 00:58:14.663397   49465 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 00:58:14.663409   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663418   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 00:58:14.663422   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663426   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663434   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 00:58:14.663443   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 00:58:14.663448   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663452   49465 command_runner.go:130] >       "size": "1363676",
	I0815 00:58:14.663458   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663463   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663474   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663480   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663484   49465 command_runner.go:130] >     },
	I0815 00:58:14.663489   49465 command_runner.go:130] >     {
	I0815 00:58:14.663495   49465 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 00:58:14.663501   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663506   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 00:58:14.663512   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663516   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663525   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 00:58:14.663538   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 00:58:14.663544   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663565   49465 command_runner.go:130] >       "size": "31470524",
	I0815 00:58:14.663574   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663579   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663585   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663589   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663593   49465 command_runner.go:130] >     },
	I0815 00:58:14.663597   49465 command_runner.go:130] >     {
	I0815 00:58:14.663605   49465 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 00:58:14.663610   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663617   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 00:58:14.663621   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663626   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663634   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 00:58:14.663642   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 00:58:14.663646   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663652   49465 command_runner.go:130] >       "size": "61245718",
	I0815 00:58:14.663656   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663663   49465 command_runner.go:130] >       "username": "nonroot",
	I0815 00:58:14.663667   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663673   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663677   49465 command_runner.go:130] >     },
	I0815 00:58:14.663682   49465 command_runner.go:130] >     {
	I0815 00:58:14.663689   49465 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 00:58:14.663695   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663700   49465 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 00:58:14.663706   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663710   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663723   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 00:58:14.663732   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 00:58:14.663735   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663739   49465 command_runner.go:130] >       "size": "149009664",
	I0815 00:58:14.663745   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663749   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663757   49465 command_runner.go:130] >       },
	I0815 00:58:14.663764   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663768   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663774   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663778   49465 command_runner.go:130] >     },
	I0815 00:58:14.663784   49465 command_runner.go:130] >     {
	I0815 00:58:14.663790   49465 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 00:58:14.663796   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663800   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 00:58:14.663806   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663810   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663819   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 00:58:14.663826   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 00:58:14.663832   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663836   49465 command_runner.go:130] >       "size": "95233506",
	I0815 00:58:14.663842   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663846   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663852   49465 command_runner.go:130] >       },
	I0815 00:58:14.663855   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663860   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663864   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663869   49465 command_runner.go:130] >     },
	I0815 00:58:14.663872   49465 command_runner.go:130] >     {
	I0815 00:58:14.663882   49465 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 00:58:14.663887   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663893   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 00:58:14.663898   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663902   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663918   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 00:58:14.663928   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 00:58:14.663933   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663937   49465 command_runner.go:130] >       "size": "89437512",
	I0815 00:58:14.663943   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663947   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663953   49465 command_runner.go:130] >       },
	I0815 00:58:14.663957   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663963   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663968   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663973   49465 command_runner.go:130] >     },
	I0815 00:58:14.663976   49465 command_runner.go:130] >     {
	I0815 00:58:14.663984   49465 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 00:58:14.663988   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663993   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 00:58:14.663997   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664002   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664010   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 00:58:14.664021   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 00:58:14.664027   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664031   49465 command_runner.go:130] >       "size": "92728217",
	I0815 00:58:14.664036   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.664040   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664046   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664051   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.664056   49465 command_runner.go:130] >     },
	I0815 00:58:14.664059   49465 command_runner.go:130] >     {
	I0815 00:58:14.664067   49465 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 00:58:14.664071   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.664077   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 00:58:14.664083   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664087   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664094   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 00:58:14.664102   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 00:58:14.664105   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664109   49465 command_runner.go:130] >       "size": "68420936",
	I0815 00:58:14.664115   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.664119   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.664122   49465 command_runner.go:130] >       },
	I0815 00:58:14.664126   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664130   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664134   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.664137   49465 command_runner.go:130] >     },
	I0815 00:58:14.664140   49465 command_runner.go:130] >     {
	I0815 00:58:14.664146   49465 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 00:58:14.664152   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.664156   49465 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 00:58:14.664160   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664164   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664171   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 00:58:14.664179   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 00:58:14.664183   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664188   49465 command_runner.go:130] >       "size": "742080",
	I0815 00:58:14.664191   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.664197   49465 command_runner.go:130] >         "value": "65535"
	I0815 00:58:14.664200   49465 command_runner.go:130] >       },
	I0815 00:58:14.664204   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664210   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664214   49465 command_runner.go:130] >       "pinned": true
	I0815 00:58:14.664220   49465 command_runner.go:130] >     }
	I0815 00:58:14.664223   49465 command_runner.go:130] >   ]
	I0815 00:58:14.664228   49465 command_runner.go:130] > }
	I0815 00:58:14.664355   49465 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:58:14.664365   49465 cache_images.go:84] Images are preloaded, skipping loading
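The two `crictl images --output json` listings above share the same shape: an `images` array whose entries carry `id`, `repoTags`, `repoDigests`, `size`, `uid`, `username`, `spec` and `pinned`. Below is a minimal, illustrative Go decoder for that shape; the struct, program layout and behavior are assumptions for demonstration only, not minikube's own `crio.go` types, and it assumes `crictl` is available where it runs.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the JSON printed by `crictl images --output json` above.
    // The uid and spec fields are omitted here for brevity.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Username    string   `json:"username"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	// Same command the log shows being run via ssh_runner.
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags, "size:", img.Size, "pinned:", img.Pinned)
    	}
    }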
	I0815 00:58:14.664380   49465 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.31.0 crio true true} ...
	I0815 00:58:14.664529   49465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-978269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
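The kubeadm.go:946 message above is the kubelet systemd override minikube renders for this node, with the node name and IP substituted into the ExecStart line. A minimal sketch of assembling that ExecStart string from the values shown follows; the function name and layout are illustrative assumptions, not minikube's actual template code.

    package main

    import "fmt"

    // kubeletExecStart rebuilds the ExecStart line shown in the log from its
    // variable parts (Kubernetes version, node name, node IP).
    func kubeletExecStart(version, nodeName, nodeIP string) string {
    	return fmt.Sprintf(
    		"/var/lib/minikube/binaries/%s/kubelet"+
    			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
    			" --config=/var/lib/kubelet/config.yaml"+
    			" --hostname-override=%s"+
    			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
    			" --node-ip=%s",
    		version, nodeName, nodeIP)
    }

    func main() {
    	fmt.Println(kubeletExecStart("v1.31.0", "multinode-978269", "192.168.39.9"))
    }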
	I0815 00:58:14.664600   49465 ssh_runner.go:195] Run: crio config
	I0815 00:58:14.696916   49465 command_runner.go:130] ! time="2024-08-15 00:58:14.670648091Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 00:58:14.702709   49465 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0815 00:58:14.708286   49465 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 00:58:14.708313   49465 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 00:58:14.708320   49465 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 00:58:14.708323   49465 command_runner.go:130] > #
	I0815 00:58:14.708331   49465 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 00:58:14.708337   49465 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 00:58:14.708343   49465 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 00:58:14.708351   49465 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 00:58:14.708356   49465 command_runner.go:130] > # reload'.
	I0815 00:58:14.708364   49465 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 00:58:14.708373   49465 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 00:58:14.708383   49465 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 00:58:14.708392   49465 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 00:58:14.708399   49465 command_runner.go:130] > [crio]
	I0815 00:58:14.708406   49465 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 00:58:14.708412   49465 command_runner.go:130] > # containers images, in this directory.
	I0815 00:58:14.708417   49465 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 00:58:14.708428   49465 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 00:58:14.708436   49465 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 00:58:14.708443   49465 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 00:58:14.708448   49465 command_runner.go:130] > # imagestore = ""
	I0815 00:58:14.708454   49465 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 00:58:14.708463   49465 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 00:58:14.708471   49465 command_runner.go:130] > storage_driver = "overlay"
	I0815 00:58:14.708480   49465 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 00:58:14.708490   49465 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 00:58:14.708499   49465 command_runner.go:130] > storage_option = [
	I0815 00:58:14.708508   49465 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 00:58:14.708511   49465 command_runner.go:130] > ]
	I0815 00:58:14.708517   49465 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 00:58:14.708525   49465 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 00:58:14.708529   49465 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 00:58:14.708535   49465 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 00:58:14.708547   49465 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 00:58:14.708558   49465 command_runner.go:130] > # always happen on a node reboot
	I0815 00:58:14.708568   49465 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 00:58:14.708588   49465 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 00:58:14.708601   49465 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 00:58:14.708608   49465 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 00:58:14.708616   49465 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 00:58:14.708623   49465 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 00:58:14.708632   49465 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 00:58:14.708638   49465 command_runner.go:130] > # internal_wipe = true
	I0815 00:58:14.708646   49465 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 00:58:14.708667   49465 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 00:58:14.708677   49465 command_runner.go:130] > # internal_repair = false
	I0815 00:58:14.708686   49465 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 00:58:14.708708   49465 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 00:58:14.708719   49465 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 00:58:14.708730   49465 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 00:58:14.708742   49465 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 00:58:14.708749   49465 command_runner.go:130] > [crio.api]
	I0815 00:58:14.708754   49465 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 00:58:14.708763   49465 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 00:58:14.708774   49465 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 00:58:14.708784   49465 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 00:58:14.708795   49465 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 00:58:14.708806   49465 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 00:58:14.708815   49465 command_runner.go:130] > # stream_port = "0"
	I0815 00:58:14.708827   49465 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 00:58:14.708836   49465 command_runner.go:130] > # stream_enable_tls = false
	I0815 00:58:14.708848   49465 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 00:58:14.708856   49465 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 00:58:14.708868   49465 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 00:58:14.708881   49465 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 00:58:14.708890   49465 command_runner.go:130] > # minutes.
	I0815 00:58:14.708897   49465 command_runner.go:130] > # stream_tls_cert = ""
	I0815 00:58:14.708909   49465 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 00:58:14.708921   49465 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 00:58:14.708931   49465 command_runner.go:130] > # stream_tls_key = ""
	I0815 00:58:14.708941   49465 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 00:58:14.708951   49465 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 00:58:14.708988   49465 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 00:58:14.708999   49465 command_runner.go:130] > # stream_tls_ca = ""
	I0815 00:58:14.709010   49465 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 00:58:14.709025   49465 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 00:58:14.709039   49465 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 00:58:14.709049   49465 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0815 00:58:14.709061   49465 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 00:58:14.709073   49465 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 00:58:14.709080   49465 command_runner.go:130] > [crio.runtime]
	I0815 00:58:14.709087   49465 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 00:58:14.709099   49465 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 00:58:14.709109   49465 command_runner.go:130] > # "nofile=1024:2048"
	I0815 00:58:14.709119   49465 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 00:58:14.709128   49465 command_runner.go:130] > # default_ulimits = [
	I0815 00:58:14.709136   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709145   49465 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 00:58:14.709154   49465 command_runner.go:130] > # no_pivot = false
	I0815 00:58:14.709164   49465 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 00:58:14.709175   49465 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 00:58:14.709183   49465 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 00:58:14.709191   49465 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 00:58:14.709201   49465 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 00:58:14.709215   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 00:58:14.709226   49465 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 00:58:14.709235   49465 command_runner.go:130] > # Cgroup setting for conmon
	I0815 00:58:14.709248   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 00:58:14.709258   49465 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 00:58:14.709270   49465 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 00:58:14.709279   49465 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 00:58:14.709310   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 00:58:14.709321   49465 command_runner.go:130] > conmon_env = [
	I0815 00:58:14.709330   49465 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 00:58:14.709339   49465 command_runner.go:130] > ]
	I0815 00:58:14.709347   49465 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 00:58:14.709357   49465 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 00:58:14.709368   49465 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 00:58:14.709384   49465 command_runner.go:130] > # default_env = [
	I0815 00:58:14.709391   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709397   49465 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 00:58:14.709411   49465 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0815 00:58:14.709421   49465 command_runner.go:130] > # selinux = false
	I0815 00:58:14.709431   49465 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 00:58:14.709443   49465 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 00:58:14.709455   49465 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 00:58:14.709464   49465 command_runner.go:130] > # seccomp_profile = ""
	I0815 00:58:14.709476   49465 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 00:58:14.709487   49465 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 00:58:14.709496   49465 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 00:58:14.709502   49465 command_runner.go:130] > # which might increase security.
	I0815 00:58:14.709513   49465 command_runner.go:130] > # This option is currently deprecated,
	I0815 00:58:14.709525   49465 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 00:58:14.709534   49465 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 00:58:14.709544   49465 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 00:58:14.709557   49465 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 00:58:14.709568   49465 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 00:58:14.709581   49465 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 00:58:14.709590   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.709597   49465 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 00:58:14.709606   49465 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 00:58:14.709616   49465 command_runner.go:130] > # the cgroup blockio controller.
	I0815 00:58:14.709626   49465 command_runner.go:130] > # blockio_config_file = ""
	I0815 00:58:14.709637   49465 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 00:58:14.709646   49465 command_runner.go:130] > # blockio parameters.
	I0815 00:58:14.709656   49465 command_runner.go:130] > # blockio_reload = false
	I0815 00:58:14.709668   49465 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 00:58:14.709676   49465 command_runner.go:130] > # irqbalance daemon.
	I0815 00:58:14.709682   49465 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 00:58:14.709697   49465 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0815 00:58:14.709711   49465 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 00:58:14.709725   49465 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 00:58:14.709737   49465 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 00:58:14.709750   49465 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 00:58:14.709768   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.709776   49465 command_runner.go:130] > # rdt_config_file = ""
	I0815 00:58:14.709781   49465 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 00:58:14.709789   49465 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 00:58:14.709831   49465 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 00:58:14.709842   49465 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 00:58:14.709855   49465 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 00:58:14.709864   49465 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 00:58:14.709871   49465 command_runner.go:130] > # will be added.
	I0815 00:58:14.709878   49465 command_runner.go:130] > # default_capabilities = [
	I0815 00:58:14.709886   49465 command_runner.go:130] > # 	"CHOWN",
	I0815 00:58:14.709893   49465 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 00:58:14.709907   49465 command_runner.go:130] > # 	"FSETID",
	I0815 00:58:14.709916   49465 command_runner.go:130] > # 	"FOWNER",
	I0815 00:58:14.709922   49465 command_runner.go:130] > # 	"SETGID",
	I0815 00:58:14.709929   49465 command_runner.go:130] > # 	"SETUID",
	I0815 00:58:14.709938   49465 command_runner.go:130] > # 	"SETPCAP",
	I0815 00:58:14.709944   49465 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 00:58:14.709952   49465 command_runner.go:130] > # 	"KILL",
	I0815 00:58:14.709957   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709967   49465 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 00:58:14.709984   49465 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 00:58:14.709995   49465 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 00:58:14.710025   49465 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 00:58:14.710041   49465 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 00:58:14.710050   49465 command_runner.go:130] > default_sysctls = [
	I0815 00:58:14.710060   49465 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 00:58:14.710066   49465 command_runner.go:130] > ]
	I0815 00:58:14.710070   49465 command_runner.go:130] > # List of devices on the host that a
	I0815 00:58:14.710082   49465 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 00:58:14.710093   49465 command_runner.go:130] > # allowed_devices = [
	I0815 00:58:14.710099   49465 command_runner.go:130] > # 	"/dev/fuse",
	I0815 00:58:14.710108   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710118   49465 command_runner.go:130] > # List of additional devices. specified as
	I0815 00:58:14.710131   49465 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 00:58:14.710142   49465 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 00:58:14.710163   49465 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 00:58:14.710171   49465 command_runner.go:130] > # additional_devices = [
	I0815 00:58:14.710176   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710187   49465 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 00:58:14.710197   49465 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 00:58:14.710206   49465 command_runner.go:130] > # 	"/etc/cdi",
	I0815 00:58:14.710214   49465 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 00:58:14.710219   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710232   49465 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 00:58:14.710244   49465 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 00:58:14.710251   49465 command_runner.go:130] > # Defaults to false.
	I0815 00:58:14.710256   49465 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 00:58:14.710268   49465 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 00:58:14.710281   49465 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 00:58:14.710287   49465 command_runner.go:130] > # hooks_dir = [
	I0815 00:58:14.710301   49465 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 00:58:14.710310   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710319   49465 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 00:58:14.710332   49465 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 00:58:14.710342   49465 command_runner.go:130] > # its default mounts from the following two files:
	I0815 00:58:14.710348   49465 command_runner.go:130] > #
	I0815 00:58:14.710354   49465 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 00:58:14.710366   49465 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 00:58:14.710378   49465 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 00:58:14.710385   49465 command_runner.go:130] > #
	I0815 00:58:14.710397   49465 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 00:58:14.710410   49465 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 00:58:14.710423   49465 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 00:58:14.710433   49465 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 00:58:14.710441   49465 command_runner.go:130] > #
	I0815 00:58:14.710447   49465 command_runner.go:130] > # default_mounts_file = ""
	I0815 00:58:14.710456   49465 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 00:58:14.710470   49465 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 00:58:14.710479   49465 command_runner.go:130] > pids_limit = 1024
	I0815 00:58:14.710492   49465 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0815 00:58:14.710504   49465 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 00:58:14.710525   49465 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 00:58:14.710537   49465 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 00:58:14.710543   49465 command_runner.go:130] > # log_size_max = -1
	I0815 00:58:14.710556   49465 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 00:58:14.710569   49465 command_runner.go:130] > # log_to_journald = false
	I0815 00:58:14.710581   49465 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 00:58:14.710592   49465 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 00:58:14.710604   49465 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 00:58:14.710615   49465 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 00:58:14.710627   49465 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 00:58:14.710634   49465 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 00:58:14.710641   49465 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 00:58:14.710649   49465 command_runner.go:130] > # read_only = false
	I0815 00:58:14.710662   49465 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 00:58:14.710675   49465 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 00:58:14.710685   49465 command_runner.go:130] > # live configuration reload.
	I0815 00:58:14.710695   49465 command_runner.go:130] > # log_level = "info"
	I0815 00:58:14.710704   49465 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 00:58:14.710714   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.710721   49465 command_runner.go:130] > # log_filter = ""
	I0815 00:58:14.710727   49465 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 00:58:14.710751   49465 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 00:58:14.710762   49465 command_runner.go:130] > # separated by comma.
	I0815 00:58:14.710774   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710783   49465 command_runner.go:130] > # uid_mappings = ""
	I0815 00:58:14.710796   49465 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 00:58:14.710808   49465 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 00:58:14.710817   49465 command_runner.go:130] > # separated by comma.
	I0815 00:58:14.710831   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710837   49465 command_runner.go:130] > # gid_mappings = ""
	I0815 00:58:14.710846   49465 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 00:58:14.710859   49465 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 00:58:14.710872   49465 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 00:58:14.710887   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710897   49465 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 00:58:14.710909   49465 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 00:58:14.710927   49465 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 00:58:14.710938   49465 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 00:58:14.710953   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710966   49465 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 00:58:14.710978   49465 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 00:58:14.710990   49465 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 00:58:14.711002   49465 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 00:58:14.711010   49465 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 00:58:14.711016   49465 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 00:58:14.711027   49465 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 00:58:14.711038   49465 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 00:58:14.711047   49465 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 00:58:14.711056   49465 command_runner.go:130] > drop_infra_ctr = false
	I0815 00:58:14.711069   49465 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 00:58:14.711081   49465 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 00:58:14.711094   49465 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 00:58:14.711102   49465 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 00:58:14.711110   49465 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 00:58:14.711121   49465 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 00:58:14.711133   49465 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 00:58:14.711142   49465 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 00:58:14.711152   49465 command_runner.go:130] > # shared_cpuset = ""
	I0815 00:58:14.711161   49465 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 00:58:14.711170   49465 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 00:58:14.711178   49465 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 00:58:14.711191   49465 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 00:58:14.711199   49465 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 00:58:14.711205   49465 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 00:58:14.711216   49465 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 00:58:14.711226   49465 command_runner.go:130] > # enable_criu_support = false
	I0815 00:58:14.711238   49465 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 00:58:14.711251   49465 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 00:58:14.711261   49465 command_runner.go:130] > # enable_pod_events = false
	I0815 00:58:14.711273   49465 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 00:58:14.711291   49465 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 00:58:14.711310   49465 command_runner.go:130] > # default_runtime = "runc"
	I0815 00:58:14.711322   49465 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 00:58:14.711334   49465 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0815 00:58:14.711351   49465 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 00:58:14.711367   49465 command_runner.go:130] > # creation as a file is not desired either.
	I0815 00:58:14.711382   49465 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 00:58:14.711391   49465 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 00:58:14.711395   49465 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 00:58:14.711403   49465 command_runner.go:130] > # ]
	I0815 00:58:14.711412   49465 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 00:58:14.711425   49465 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 00:58:14.711438   49465 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 00:58:14.711449   49465 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 00:58:14.711456   49465 command_runner.go:130] > #
	I0815 00:58:14.711464   49465 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 00:58:14.711475   49465 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 00:58:14.711519   49465 command_runner.go:130] > # runtime_type = "oci"
	I0815 00:58:14.711531   49465 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 00:58:14.711538   49465 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 00:58:14.711548   49465 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 00:58:14.711558   49465 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 00:58:14.711567   49465 command_runner.go:130] > # monitor_env = []
	I0815 00:58:14.711576   49465 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 00:58:14.711584   49465 command_runner.go:130] > # allowed_annotations = []
	I0815 00:58:14.711596   49465 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 00:58:14.711604   49465 command_runner.go:130] > # Where:
	I0815 00:58:14.711612   49465 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 00:58:14.711625   49465 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 00:58:14.711638   49465 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 00:58:14.711651   49465 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 00:58:14.711659   49465 command_runner.go:130] > #   in $PATH.
	I0815 00:58:14.711679   49465 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 00:58:14.711691   49465 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 00:58:14.711703   49465 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 00:58:14.711710   49465 command_runner.go:130] > #   state.
	I0815 00:58:14.711720   49465 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 00:58:14.711734   49465 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0815 00:58:14.711748   49465 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 00:58:14.711760   49465 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 00:58:14.711772   49465 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 00:58:14.711785   49465 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 00:58:14.711798   49465 command_runner.go:130] > #   The currently recognized values are:
	I0815 00:58:14.711807   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 00:58:14.711822   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 00:58:14.711834   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 00:58:14.711848   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 00:58:14.711862   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 00:58:14.711874   49465 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 00:58:14.711887   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 00:58:14.711899   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 00:58:14.711907   49465 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 00:58:14.711917   49465 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 00:58:14.711927   49465 command_runner.go:130] > #   deprecated option "conmon".
	I0815 00:58:14.711938   49465 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 00:58:14.711949   49465 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 00:58:14.711963   49465 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 00:58:14.711973   49465 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 00:58:14.711991   49465 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0815 00:58:14.711999   49465 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 00:58:14.712007   49465 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 00:58:14.712017   49465 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 00:58:14.712026   49465 command_runner.go:130] > #
	I0815 00:58:14.712033   49465 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 00:58:14.712042   49465 command_runner.go:130] > #
	I0815 00:58:14.712052   49465 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 00:58:14.712065   49465 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 00:58:14.712073   49465 command_runner.go:130] > #
	I0815 00:58:14.712082   49465 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 00:58:14.712094   49465 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 00:58:14.712100   49465 command_runner.go:130] > #
	I0815 00:58:14.712108   49465 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 00:58:14.712117   49465 command_runner.go:130] > # feature.
	I0815 00:58:14.712126   49465 command_runner.go:130] > #
	I0815 00:58:14.712138   49465 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 00:58:14.712150   49465 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 00:58:14.712162   49465 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 00:58:14.712178   49465 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 00:58:14.712187   49465 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 00:58:14.712194   49465 command_runner.go:130] > #
	I0815 00:58:14.712204   49465 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 00:58:14.712217   49465 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 00:58:14.712226   49465 command_runner.go:130] > #
	I0815 00:58:14.712236   49465 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0815 00:58:14.712247   49465 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 00:58:14.712255   49465 command_runner.go:130] > #
	I0815 00:58:14.712269   49465 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 00:58:14.712281   49465 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 00:58:14.712288   49465 command_runner.go:130] > # limitation.
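	For reference (not part of the captured log), a minimal pod sketch exercising the seccomp notifier described above; it assumes a hypothetical runtime class "crio-notifier" whose handler lists "io.kubernetes.cri-o.seccompNotifierAction" in allowed_annotations, and the pod name and image are placeholders:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo            # hypothetical name
	  annotations:
	    # Terminate the workload after the ~5s timeout once a blocked syscall is seen.
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                   # required, as noted above
	  runtimeClassName: crio-notifier        # hypothetical handler that allows the annotation
	  containers:
	  - name: app
	    image: busybox:1.36
	    command: ["sh", "-c", "sleep 3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault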
	I0815 00:58:14.712299   49465 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 00:58:14.712309   49465 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 00:58:14.712316   49465 command_runner.go:130] > runtime_type = "oci"
	I0815 00:58:14.712326   49465 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 00:58:14.712335   49465 command_runner.go:130] > runtime_config_path = ""
	I0815 00:58:14.712344   49465 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 00:58:14.712353   49465 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 00:58:14.712360   49465 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 00:58:14.712369   49465 command_runner.go:130] > monitor_env = [
	I0815 00:58:14.712378   49465 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 00:58:14.712384   49465 command_runner.go:130] > ]
	I0815 00:58:14.712392   49465 command_runner.go:130] > privileged_without_host_devices = false
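	For reference (not part of the captured log), the runtime handler named by a [crio.runtime.runtimes.*] table is selected from Kubernetes through a RuntimeClass; "runc" above is the default, so an explicit RuntimeClass is only needed for additional handlers. A minimal sketch using a hypothetical "kata" handler that is not defined in the config above:
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: kata                 # hypothetical class name
	handler: kata                # must match a [crio.runtime.runtimes.kata] table on the node
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: runtimeclass-demo    # hypothetical name
	spec:
	  runtimeClassName: kata     # the runtime handler passed to CRI-O by the CRI
	  containers:
	  - name: app
	    image: busybox:1.36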
	I0815 00:58:14.712405   49465 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 00:58:14.712417   49465 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 00:58:14.712430   49465 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 00:58:14.712443   49465 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0815 00:58:14.712459   49465 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 00:58:14.712468   49465 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 00:58:14.712479   49465 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 00:58:14.712494   49465 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 00:58:14.712507   49465 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 00:58:14.712519   49465 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 00:58:14.712525   49465 command_runner.go:130] > # Example:
	I0815 00:58:14.712533   49465 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 00:58:14.712544   49465 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 00:58:14.712552   49465 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 00:58:14.712560   49465 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 00:58:14.712563   49465 command_runner.go:130] > # cpuset = 0
	I0815 00:58:14.712568   49465 command_runner.go:130] > # cpushares = "0-1"
	I0815 00:58:14.712573   49465 command_runner.go:130] > # Where:
	I0815 00:58:14.712580   49465 command_runner.go:130] > # The workload name is workload-type.
	I0815 00:58:14.712591   49465 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 00:58:14.712600   49465 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 00:58:14.712608   49465 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 00:58:14.712620   49465 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 00:58:14.712629   49465 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
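	For reference (not part of the captured log), a pod opting into the commented example workload above would carry the activation annotation plus an optional per-container override; the exact per-container key form varies by CRI-O version, so this is a sketch following the example wording, with hypothetical names:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                   # hypothetical name
	  annotations:
	    io.crio/workload: ""                                # activation annotation; value is ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override (sketch)
	spec:
	  containers:
	  - name: app
	    image: busybox:1.36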
	I0815 00:58:14.712637   49465 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 00:58:14.712645   49465 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 00:58:14.712650   49465 command_runner.go:130] > # Default value is set to true
	I0815 00:58:14.712669   49465 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 00:58:14.712679   49465 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 00:58:14.712686   49465 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 00:58:14.712693   49465 command_runner.go:130] > # Default value is set to 'false'
	I0815 00:58:14.712700   49465 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 00:58:14.712710   49465 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 00:58:14.712715   49465 command_runner.go:130] > #
	I0815 00:58:14.712724   49465 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 00:58:14.712735   49465 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 00:58:14.712746   49465 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 00:58:14.712759   49465 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 00:58:14.712775   49465 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 00:58:14.712784   49465 command_runner.go:130] > [crio.image]
	I0815 00:58:14.712794   49465 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 00:58:14.712804   49465 command_runner.go:130] > # default_transport = "docker://"
	I0815 00:58:14.712816   49465 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 00:58:14.712825   49465 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 00:58:14.712836   49465 command_runner.go:130] > # global_auth_file = ""
	I0815 00:58:14.712847   49465 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 00:58:14.712855   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.712867   49465 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 00:58:14.712880   49465 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 00:58:14.712891   49465 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 00:58:14.712902   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.712916   49465 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 00:58:14.712925   49465 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 00:58:14.712937   49465 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0815 00:58:14.712950   49465 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0815 00:58:14.712961   49465 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 00:58:14.712971   49465 command_runner.go:130] > # pause_command = "/pause"
	I0815 00:58:14.712984   49465 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 00:58:14.712995   49465 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 00:58:14.713006   49465 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 00:58:14.713019   49465 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 00:58:14.713030   49465 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 00:58:14.713043   49465 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 00:58:14.713053   49465 command_runner.go:130] > # pinned_images = [
	I0815 00:58:14.713061   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713071   49465 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 00:58:14.713083   49465 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 00:58:14.713095   49465 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 00:58:14.713105   49465 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 00:58:14.713115   49465 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 00:58:14.713124   49465 command_runner.go:130] > # signature_policy = ""
	I0815 00:58:14.713137   49465 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 00:58:14.713150   49465 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 00:58:14.713163   49465 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 00:58:14.713174   49465 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0815 00:58:14.713185   49465 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 00:58:14.713192   49465 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 00:58:14.713202   49465 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 00:58:14.713214   49465 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 00:58:14.713225   49465 command_runner.go:130] > # changing them here.
	I0815 00:58:14.713239   49465 command_runner.go:130] > # insecure_registries = [
	I0815 00:58:14.713247   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713258   49465 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 00:58:14.713268   49465 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 00:58:14.713277   49465 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 00:58:14.713285   49465 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 00:58:14.713290   49465 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 00:58:14.713310   49465 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0815 00:58:14.713320   49465 command_runner.go:130] > # CNI plugins.
	I0815 00:58:14.713326   49465 command_runner.go:130] > [crio.network]
	I0815 00:58:14.713339   49465 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 00:58:14.713350   49465 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 00:58:14.713359   49465 command_runner.go:130] > # cni_default_network = ""
	I0815 00:58:14.713371   49465 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 00:58:14.713381   49465 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 00:58:14.713390   49465 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 00:58:14.713395   49465 command_runner.go:130] > # plugin_dirs = [
	I0815 00:58:14.713400   49465 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 00:58:14.713409   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713422   49465 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 00:58:14.713428   49465 command_runner.go:130] > [crio.metrics]
	I0815 00:58:14.713438   49465 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 00:58:14.713447   49465 command_runner.go:130] > enable_metrics = true
	I0815 00:58:14.713457   49465 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 00:58:14.713467   49465 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 00:58:14.713478   49465 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0815 00:58:14.713488   49465 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 00:58:14.713496   49465 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 00:58:14.713503   49465 command_runner.go:130] > # metrics_collectors = [
	I0815 00:58:14.713512   49465 command_runner.go:130] > # 	"operations",
	I0815 00:58:14.713523   49465 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 00:58:14.713533   49465 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 00:58:14.713540   49465 command_runner.go:130] > # 	"operations_errors",
	I0815 00:58:14.713554   49465 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 00:58:14.713563   49465 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 00:58:14.713571   49465 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 00:58:14.713579   49465 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 00:58:14.713583   49465 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 00:58:14.713594   49465 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 00:58:14.713604   49465 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 00:58:14.713612   49465 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 00:58:14.713622   49465 command_runner.go:130] > # 	"containers_oom_total",
	I0815 00:58:14.713631   49465 command_runner.go:130] > # 	"containers_oom",
	I0815 00:58:14.713639   49465 command_runner.go:130] > # 	"processes_defunct",
	I0815 00:58:14.713648   49465 command_runner.go:130] > # 	"operations_total",
	I0815 00:58:14.713658   49465 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 00:58:14.713666   49465 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 00:58:14.713673   49465 command_runner.go:130] > # 	"operations_errors_total",
	I0815 00:58:14.713678   49465 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 00:58:14.713687   49465 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 00:58:14.713697   49465 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 00:58:14.713707   49465 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 00:58:14.713720   49465 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 00:58:14.713729   49465 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 00:58:14.713739   49465 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 00:58:14.713749   49465 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 00:58:14.713756   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713762   49465 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 00:58:14.713767   49465 command_runner.go:130] > # metrics_port = 9090
	I0815 00:58:14.713776   49465 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 00:58:14.713786   49465 command_runner.go:130] > # metrics_socket = ""
	I0815 00:58:14.713794   49465 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 00:58:14.713807   49465 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 00:58:14.713820   49465 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 00:58:14.713831   49465 command_runner.go:130] > # certificate on any modification event.
	I0815 00:58:14.713840   49465 command_runner.go:130] > # metrics_cert = ""
	I0815 00:58:14.713849   49465 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 00:58:14.713857   49465 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 00:58:14.713862   49465 command_runner.go:130] > # metrics_key = ""
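	For reference (not part of the captured log), with enable_metrics = true above, CRI-O serves Prometheus metrics on metrics_port (default 9090). A hypothetical scrape job pointed at this node's IP from the log:
	scrape_configs:
	  - job_name: "crio"                       # hypothetical job name
	    static_configs:
	      - targets: ["192.168.39.9:9090"]     # node IP from this log, default metrics_port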
	I0815 00:58:14.713873   49465 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 00:58:14.713883   49465 command_runner.go:130] > [crio.tracing]
	I0815 00:58:14.713892   49465 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 00:58:14.713902   49465 command_runner.go:130] > # enable_tracing = false
	I0815 00:58:14.713914   49465 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0815 00:58:14.713924   49465 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 00:58:14.713938   49465 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 00:58:14.713948   49465 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 00:58:14.713955   49465 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 00:58:14.713959   49465 command_runner.go:130] > [crio.nri]
	I0815 00:58:14.713968   49465 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 00:58:14.713975   49465 command_runner.go:130] > # enable_nri = false
	I0815 00:58:14.713985   49465 command_runner.go:130] > # NRI socket to listen on.
	I0815 00:58:14.713996   49465 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 00:58:14.714006   49465 command_runner.go:130] > # NRI plugin directory to use.
	I0815 00:58:14.714016   49465 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 00:58:14.714027   49465 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 00:58:14.714036   49465 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 00:58:14.714044   49465 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 00:58:14.714050   49465 command_runner.go:130] > # nri_disable_connections = false
	I0815 00:58:14.714061   49465 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 00:58:14.714072   49465 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 00:58:14.714083   49465 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 00:58:14.714092   49465 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 00:58:14.714103   49465 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 00:58:14.714111   49465 command_runner.go:130] > [crio.stats]
	I0815 00:58:14.714122   49465 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 00:58:14.714130   49465 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 00:58:14.714136   49465 command_runner.go:130] > # stats_collection_period = 0
	I0815 00:58:14.714257   49465 cni.go:84] Creating CNI manager for ""
	I0815 00:58:14.714267   49465 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 00:58:14.714278   49465 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:58:14.714317   49465 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-978269 NodeName:multinode-978269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:58:14.714482   49465 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-978269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 00:58:14.714561   49465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:58:14.724922   49465 command_runner.go:130] > kubeadm
	I0815 00:58:14.724937   49465 command_runner.go:130] > kubectl
	I0815 00:58:14.724943   49465 command_runner.go:130] > kubelet
	I0815 00:58:14.724965   49465 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:58:14.725022   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:58:14.734374   49465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0815 00:58:14.750683   49465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:58:14.765486   49465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0815 00:58:14.780055   49465 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I0815 00:58:14.783384   49465 command_runner.go:130] > 192.168.39.9	control-plane.minikube.internal
	I0815 00:58:14.783508   49465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:58:14.924473   49465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:58:14.939451   49465 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269 for IP: 192.168.39.9
	I0815 00:58:14.939483   49465 certs.go:194] generating shared ca certs ...
	I0815 00:58:14.939507   49465 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:58:14.939681   49465 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:58:14.939718   49465 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:58:14.939727   49465 certs.go:256] generating profile certs ...
	I0815 00:58:14.939857   49465 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/client.key
	I0815 00:58:14.939920   49465 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key.c466d5b3
	I0815 00:58:14.939953   49465 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key
	I0815 00:58:14.939962   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:58:14.939974   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:58:14.939988   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:58:14.939997   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:58:14.940009   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:58:14.940022   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:58:14.940034   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:58:14.940044   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:58:14.940100   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:58:14.940126   49465 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:58:14.940135   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:58:14.940154   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:58:14.940176   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:58:14.940197   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:58:14.940233   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:58:14.940259   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:58:14.940272   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:58:14.940282   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:14.940889   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:58:14.964211   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:58:14.985865   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:58:15.007315   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:58:15.029270   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 00:58:15.051111   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:58:15.072419   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:58:15.093667   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:58:15.114558   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:58:15.135570   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:58:15.157027   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:58:15.178377   49465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:58:15.193831   49465 ssh_runner.go:195] Run: openssl version
	I0815 00:58:15.199414   49465 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 00:58:15.199487   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:58:15.209877   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.213935   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.213990   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.214037   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.219265   49465 command_runner.go:130] > 3ec20f2e
	I0815 00:58:15.219354   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:58:15.228613   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:58:15.238733   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242711   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242745   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242791   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.248478   49465 command_runner.go:130] > b5213941
	I0815 00:58:15.248534   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:58:15.257272   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:58:15.267252   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271451   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271474   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271515   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.276599   49465 command_runner.go:130] > 51391683
	I0815 00:58:15.276765   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 00:58:15.285604   49465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:58:15.289717   49465 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:58:15.289734   49465 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 00:58:15.289740   49465 command_runner.go:130] > Device: 253,1	Inode: 3150358     Links: 1
	I0815 00:58:15.289746   49465 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 00:58:15.289754   49465 command_runner.go:130] > Access: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289759   49465 command_runner.go:130] > Modify: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289765   49465 command_runner.go:130] > Change: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289773   49465 command_runner.go:130] >  Birth: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289834   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 00:58:15.294826   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.294984   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 00:58:15.299900   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.300072   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 00:58:15.304969   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.305134   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 00:58:15.310062   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.310108   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 00:58:15.314921   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.315104   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 00:58:15.320056   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.320125   49465 kubeadm.go:392] StartCluster: {Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:58:15.320256   49465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:58:15.320295   49465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:58:15.360144   49465 command_runner.go:130] > 8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051
	I0815 00:58:15.360169   49465 command_runner.go:130] > 340a1428c9abff824f0bb4ecd2c9711c6cc39828885cfa0cd4e220850cc17e80
	I0815 00:58:15.360179   49465 command_runner.go:130] > 22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad
	I0815 00:58:15.360190   49465 command_runner.go:130] > 8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e
	I0815 00:58:15.360202   49465 command_runner.go:130] > d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d
	I0815 00:58:15.360213   49465 command_runner.go:130] > a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff
	I0815 00:58:15.360224   49465 command_runner.go:130] > 5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063
	I0815 00:58:15.360239   49465 command_runner.go:130] > 1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9
	I0815 00:58:15.360250   49465 command_runner.go:130] > 60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b
	I0815 00:58:15.360280   49465 cri.go:89] found id: "8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051"
	I0815 00:58:15.360291   49465 cri.go:89] found id: "340a1428c9abff824f0bb4ecd2c9711c6cc39828885cfa0cd4e220850cc17e80"
	I0815 00:58:15.360298   49465 cri.go:89] found id: "22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad"
	I0815 00:58:15.360306   49465 cri.go:89] found id: "8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e"
	I0815 00:58:15.360310   49465 cri.go:89] found id: "d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d"
	I0815 00:58:15.360318   49465 cri.go:89] found id: "a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff"
	I0815 00:58:15.360322   49465 cri.go:89] found id: "5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063"
	I0815 00:58:15.360329   49465 cri.go:89] found id: "1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9"
	I0815 00:58:15.360334   49465 cri.go:89] found id: "60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b"
	I0815 00:58:15.360341   49465 cri.go:89] found id: ""
	I0815 00:58:15.360385   49465 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.320359494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683602320338039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2350e26e-1718-41f0-b515-b2d4b95ff16e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.320867271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7882a1c-c274-48a1-bc15-bb6c52fb3237 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.320931427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7882a1c-c274-48a1-bc15-bb6c52fb3237 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.321302774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7882a1c-c274-48a1-bc15-bb6c52fb3237 name=/runtime.v1.RuntimeService/ListContainers
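	The three RPCs that repeat throughout this excerpt (RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers with an empty filter) are standard CRI polling calls; the kubelet and tooling such as crictl issue the same requests against CRI-O. As a rough sketch (assuming the multinode-978269 node is still up and CRI-O is on its default socket), the same state could be inspected by hand with:
	out/minikube-linux-amd64 -p multinode-978269 ssh "sudo crictl version"       # RuntimeService/Version
	out/minikube-linux-amd64 -p multinode-978269 ssh "sudo crictl imagefsinfo"   # ImageService/ImageFsInfo
	out/minikube-linux-amd64 -p multinode-978269 ssh "sudo crictl ps -a"         # ListContainers with no filter (running and exited containers)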
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.456806579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aaaf73dd-9176-43c9-855d-03e33de55897 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.456898432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aaaf73dd-9176-43c9-855d-03e33de55897 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.458436624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f894f7a1-e49e-4911-8e27-2fd860d4a1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.459023217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683602458997146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f894f7a1-e49e-4911-8e27-2fd860d4a1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.459546880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c42a659-cf97-4556-85bc-8d33a5a21ee8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.459727653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c42a659-cf97-4556-85bc-8d33a5a21ee8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:00:02 multinode-978269 crio[2868]: time="2024-08-15 01:00:02.460123512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c42a659-cf97-4556-85bc-8d33a5a21ee8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	120eb7a5322b4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   16ad6434f062d       busybox-7dff88458-7t6jw
	48c4909f10882       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   aca1c8c059dc6       kindnet-jtg5x
	4fcf1beb1bc92       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   84d0e2e7ed71f       coredns-6f6b679f8f-z2fdx
	8c8808d72fd47       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   4bc92df2419c1       kube-proxy-9dv78
	3ee3bcd285e9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e28fe438bc0c2       storage-provisioner
	d77133fc7b4e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   80cf7a8ac2d8c       etcd-multinode-978269
	faada8a424239       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   38c963d11d6ca       kube-scheduler-multinode-978269
	ef69db1b2a37f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   0138fd7517549       kube-controller-manager-multinode-978269
	e855a6e97f20c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   07367e8e3488f       kube-apiserver-multinode-978269
	8bd26deb668b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   a1d0190337c10       coredns-6f6b679f8f-z2fdx
	800515c9ab5a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   6b4d4b0ac1a32       busybox-7dff88458-7t6jw
	22e4139a30c48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   e349553d11879       storage-provisioner
	8f29be96a4aa4       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   a93c061b3b056       kindnet-jtg5x
	d84e329513e70       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   2eafab9d119ac       kube-proxy-9dv78
	a0e3afa8b91de       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   0ffa578248454       kube-scheduler-multinode-978269
	5a6497a8901c2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   a1e7e4c32d43d       kube-apiserver-multinode-978269
	1295ded1643dc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   a5e805766ccb4       kube-controller-manager-multinode-978269
	60d7fb737c967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   a58ecc268ed54       etcd-multinode-978269
	
	
	==> coredns [4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42588 - 4420 "HINFO IN 424660939412603124.5981377023232911938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011763689s
	
	
	==> coredns [8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051] <==
	
	
	==> describe nodes <==
	Name:               multinode-978269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-978269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=multinode-978269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_51_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:51:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-978269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    multinode-978269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 011be81033174bab9baea31821c8cceb
	  System UUID:                011be810-3317-4bab-9bae-a31821c8cceb
	  Boot ID:                    321329e1-47f2-4460-8db4-7c9aee80ba74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7t6jw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-6f6b679f8f-z2fdx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m15s
	  kube-system                 etcd-multinode-978269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m21s
	  kube-system                 kindnet-jtg5x                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m16s
	  kube-system                 kube-apiserver-multinode-978269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-multinode-978269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-9dv78                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-multinode-978269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m14s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m21s                kubelet          Node multinode-978269 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m21s                kubelet          Node multinode-978269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s                kubelet          Node multinode-978269 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m21s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                node-controller  Node multinode-978269 event: Registered Node multinode-978269 in Controller
	  Normal  NodeReady                8m                   kubelet          Node multinode-978269 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node multinode-978269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node multinode-978269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node multinode-978269 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-978269 event: Registered Node multinode-978269 in Controller
	
	
	Name:               multinode-978269-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-978269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=multinode-978269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_59_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-978269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 00:59:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 00:59:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 00:59:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 00:59:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 00:59:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-978269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb260eb0f094c598c21db9a6f456d5b
	  System UUID:                feb260eb-0f09-4c59-8c21-db9a6f456d5b
	  Boot ID:                    aae55ab2-4686-4046-9a2b-85273ca11b87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wcqhk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-p5zrg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m30s
	  kube-system                 kube-proxy-mstc7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m25s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m30s (x2 over 7m31s)  kubelet     Node multinode-978269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s (x2 over 7m31s)  kubelet     Node multinode-978269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s (x2 over 7m31s)  kubelet     Node multinode-978269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m11s                  kubelet     Node multinode-978269-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  60s (x2 over 61s)      kubelet     Node multinode-978269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 61s)      kubelet     Node multinode-978269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 61s)      kubelet     Node multinode-978269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                41s                    kubelet     Node multinode-978269-m02 status is now: NodeReady
	
	
	Name:               multinode-978269-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-978269-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=multinode-978269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_59_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:59:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-978269-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:00:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:59:59 +0000   Thu, 15 Aug 2024 00:59:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:59:59 +0000   Thu, 15 Aug 2024 00:59:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:59:59 +0000   Thu, 15 Aug 2024 00:59:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:59:59 +0000   Thu, 15 Aug 2024 00:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    multinode-978269-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac165d1622a4ef598b89f5aa7b3b085
	  System UUID:                4ac165d1-622a-4ef5-98b8-9f5aa7b3b085
	  Boot ID:                    2628eaf3-d2f5-470c-9202-87d712dc01ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qn9xq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-sj276    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node multinode-978269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-978269-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-978269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet          Node multinode-978269-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-978269-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-978269-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-978269-m03 event: Registered Node multinode-978269-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-978269-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055972] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054978] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.161102] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.126423] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.270525] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.756944] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.826463] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.060576] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993565] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.072326] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.120277] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.100496] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.044882] kauditd_printk_skb: 68 callbacks suppressed
	[Aug15 00:52] kauditd_printk_skb: 14 callbacks suppressed
	[Aug15 00:58] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.140718] systemd-fstab-generator[2698]: Ignoring "noauto" option for root device
	[  +0.160625] systemd-fstab-generator[2712]: Ignoring "noauto" option for root device
	[  +0.146252] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +0.357448] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.752274] systemd-fstab-generator[2976]: Ignoring "noauto" option for root device
	[  +1.759861] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[  +5.644772] kauditd_printk_skb: 196 callbacks suppressed
	[  +6.561631] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.849715] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[ +18.385785] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b] <==
	{"level":"warn","ts":"2024-08-15T00:52:31.903474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.910054ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:52:31.903586Z","caller":"traceutil/trace.go:171","msg":"trace[1043404670] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:442; }","duration":"126.098711ms","start":"2024-08-15T00:52:31.777466Z","end":"2024-08-15T00:52:31.903565Z","steps":["trace[1043404670] 'range keys from in-memory index tree'  (duration: 125.886659ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:52:31.903718Z","caller":"traceutil/trace.go:171","msg":"trace[1405404509] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"211.727279ms","start":"2024-08-15T00:52:31.691982Z","end":"2024-08-15T00:52:31.903709Z","steps":["trace[1405404509] 'process raft request'  (duration: 136.334786ms)","trace[1405404509] 'compare'  (duration: 74.660063ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:52:37.951499Z","caller":"traceutil/trace.go:171","msg":"trace[1970639582] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"142.515217ms","start":"2024-08-15T00:52:37.808963Z","end":"2024-08-15T00:52:37.951478Z","steps":["trace[1970639582] 'process raft request'  (duration: 142.387713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:52:38.262259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.560521ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6583015068228233705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:457 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T00:52:38.262406Z","caller":"traceutil/trace.go:171","msg":"trace[1249122724] linearizableReadLoop","detail":"{readStateIndex:504; appliedIndex:503; }","duration":"137.737379ms","start":"2024-08-15T00:52:38.124657Z","end":"2024-08-15T00:52:38.262395Z","steps":["trace[1249122724] 'read index received'  (duration: 35.3295ms)","trace[1249122724] 'applied index is now lower than readState.Index'  (duration: 102.406739ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:52:38.262505Z","caller":"traceutil/trace.go:171","msg":"trace[1274586763] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"303.648575ms","start":"2024-08-15T00:52:37.958844Z","end":"2024-08-15T00:52:38.262493Z","steps":["trace[1274586763] 'process raft request'  (duration: 201.349685ms)","trace[1274586763] 'compare'  (duration: 101.428191ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:52:38.262597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.931681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-978269-m02\" ","response":"range_response_count:1 size:2887"}
	{"level":"warn","ts":"2024-08-15T00:52:38.262607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:52:37.958826Z","time spent":"303.735227ms","remote":"127.0.0.1:57356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:457 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-15T00:52:38.262637Z","caller":"traceutil/trace.go:171","msg":"trace[1723559130] range","detail":"{range_begin:/registry/minions/multinode-978269-m02; range_end:; response_count:1; response_revision:481; }","duration":"137.976275ms","start":"2024-08-15T00:52:38.124654Z","end":"2024-08-15T00:52:38.262630Z","steps":["trace[1723559130] 'agreement among raft nodes before linearized reading'  (duration: 137.873227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:53:27.784585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.164456ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6583015068228234127 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-978269-m03.17ebc0beaa74d324\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-978269-m03.17ebc0beaa74d324\" value_size:642 lease:6583015068228233819 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T00:53:27.784756Z","caller":"traceutil/trace.go:171","msg":"trace[237720874] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:611; }","duration":"134.319766ms","start":"2024-08-15T00:53:27.650419Z","end":"2024-08-15T00:53:27.784739Z","steps":["trace[237720874] 'read index received'  (duration: 3.824125ms)","trace[237720874] 'applied index is now lower than readState.Index'  (duration: 130.494928ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:53:27.784829Z","caller":"traceutil/trace.go:171","msg":"trace[305826630] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"212.090036ms","start":"2024-08-15T00:53:27.572726Z","end":"2024-08-15T00:53:27.784816Z","steps":["trace[305826630] 'process raft request'  (duration: 81.588395ms)","trace[305826630] 'compare'  (duration: 130.077002ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:53:27.785060Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.632125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-978269-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:53:27.785095Z","caller":"traceutil/trace.go:171","msg":"trace[192360410] range","detail":"{range_begin:/registry/minions/multinode-978269-m03; range_end:; response_count:0; response_revision:578; }","duration":"134.673362ms","start":"2024-08-15T00:53:27.650415Z","end":"2024-08-15T00:53:27.785088Z","steps":["trace[192360410] 'agreement among raft nodes before linearized reading'  (duration: 134.617915ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:56:42.025316Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T00:56:42.025465Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-978269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	{"level":"warn","ts":"2024-08-15T00:56:42.025613Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.025715Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.084897Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.084985Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T00:56:42.085073Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6c05fccff8d5b5b","current-leader-member-id":"e6c05fccff8d5b5b"}
	{"level":"info","ts":"2024-08-15T00:56:42.088370Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:56:42.088494Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:56:42.088514Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-978269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	
	
	==> etcd [d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc] <==
	{"level":"info","ts":"2024-08-15T00:58:17.963578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b switched to configuration voters=(16627395158317292379)"}
	{"level":"info","ts":"2024-08-15T00:58:17.963668Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e83eb6b012f1d297","local-member-id":"e6c05fccff8d5b5b","added-peer-id":"e6c05fccff8d5b5b","added-peer-peer-urls":["https://192.168.39.9:2380"]}
	{"level":"info","ts":"2024-08-15T00:58:17.964312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e83eb6b012f1d297","local-member-id":"e6c05fccff8d5b5b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:58:17.971395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:58:17.990244Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T00:58:17.990516Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e6c05fccff8d5b5b","initial-advertise-peer-urls":["https://192.168.39.9:2380"],"listen-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.9:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T00:58:17.990554Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T00:58:17.990698Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:58:17.990719Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:58:19.677802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b received MsgPreVoteResp from e6c05fccff8d5b5b at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b received MsgVoteResp from e6c05fccff8d5b5b at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became leader at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e6c05fccff8d5b5b elected leader e6c05fccff8d5b5b at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.684010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:58:19.685136Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:58:19.683968Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e6c05fccff8d5b5b","local-member-attributes":"{Name:multinode-978269 ClientURLs:[https://192.168.39.9:2379]}","request-path":"/0/members/e6c05fccff8d5b5b/attributes","cluster-id":"e83eb6b012f1d297","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T00:58:19.685497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:58:19.685834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T00:58:19.685897Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T00:58:19.686515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.9:2379"}
	{"level":"info","ts":"2024-08-15T00:58:19.686776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:58:19.688034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:00:02 up 8 min,  0 users,  load average: 0.15, 0.35, 0.25
	Linux multinode-978269 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d] <==
	I0815 00:59:13.410222       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:59:23.410363       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:59:23.410517       1 main.go:299] handling current node
	I0815 00:59:23.410563       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:59:23.410593       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:59:23.410755       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:59:23.410795       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:59:33.409654       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:59:33.409705       1 main.go:299] handling current node
	I0815 00:59:33.409759       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:59:33.409766       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:59:33.409984       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:59:33.410045       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:59:43.410761       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:59:43.410933       1 main.go:299] handling current node
	I0815 00:59:43.410953       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:59:43.410962       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:59:43.411232       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:59:43.411260       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.2.0/24] 
	I0815 00:59:53.409692       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:59:53.409775       1 main.go:299] handling current node
	I0815 00:59:53.409807       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:59:53.409816       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:59:53.409996       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:59:53.410034       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e] <==
	I0815 00:56:01.797788       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:11.799108       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:11.799222       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:11.799367       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:11.799388       1 main.go:299] handling current node
	I0815 00:56:11.799414       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:11.799431       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:21.803628       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:21.803693       1 main.go:299] handling current node
	I0815 00:56:21.803714       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:21.803724       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:21.803905       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:21.803925       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:31.796812       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:31.796886       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:31.797081       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:31.797102       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:31.797206       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:31.797226       1 main.go:299] handling current node
	I0815 00:56:41.798599       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:41.798687       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:41.798879       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:41.798889       1 main.go:299] handling current node
	I0815 00:56:41.798926       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:41.798943       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063] <==
	I0815 00:51:40.127222       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0815 00:51:40.127333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 00:51:40.755852       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 00:51:40.798997       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 00:51:40.928890       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0815 00:51:40.937124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.9]
	I0815 00:51:40.938956       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:51:40.946055       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:51:41.179495       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:51:41.812103       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:51:41.831591       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 00:51:41.842266       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:51:46.784806       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0815 00:51:46.840511       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0815 00:52:58.454951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37682: use of closed network connection
	E0815 00:52:58.641955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37704: use of closed network connection
	E0815 00:52:58.810757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37722: use of closed network connection
	E0815 00:52:58.974447       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37734: use of closed network connection
	E0815 00:52:59.140548       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37760: use of closed network connection
	E0815 00:52:59.299594       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37770: use of closed network connection
	E0815 00:52:59.571302       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37792: use of closed network connection
	E0815 00:52:59.735482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37818: use of closed network connection
	E0815 00:52:59.895388       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37824: use of closed network connection
	E0815 00:53:00.054671       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37840: use of closed network connection
	I0815 00:56:42.024307       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c] <==
	I0815 00:58:21.052201       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:58:21.052322       1 policy_source.go:224] refreshing policies
	I0815 00:58:21.055662       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 00:58:21.055712       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 00:58:21.055719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 00:58:21.056992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 00:58:21.058407       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 00:58:21.060112       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 00:58:21.060393       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 00:58:21.060540       1 aggregator.go:171] initial CRD sync complete...
	I0815 00:58:21.060579       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 00:58:21.060601       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 00:58:21.060623       1 cache.go:39] Caches are synced for autoregister controller
	I0815 00:58:21.062891       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 00:58:21.062969       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 00:58:21.063905       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0815 00:58:21.075747       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 00:58:21.870747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 00:58:23.329474       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:58:23.444891       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:58:23.458658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:58:23.533627       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 00:58:23.540400       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 00:58:24.662941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:58:24.712073       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9] <==
	I0815 00:54:16.398309       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:16.398887       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:17.641070       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-978269-m03\" does not exist"
	I0815 00:54:17.641789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:17.651221       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-978269-m03" podCIDRs=["10.244.3.0/24"]
	I0815 00:54:17.651252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.654584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.659861       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.892896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:18.216625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:21.391406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:27.996419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:36.791876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:36.792084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:36.801088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:41.369429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:16.385336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:16.385680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m03"
	I0815 00:55:16.408687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:16.422510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.267378ms"
	I0815 00:55:16.422592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.177µs"
	I0815 00:55:21.438461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:21.453032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:21.475476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:31.545822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	
	
	==> kube-controller-manager [ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8] <==
	I0815 00:59:21.411483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:59:21.423745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:59:21.432818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.463µs"
	I0815 00:59:21.445501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.854µs"
	I0815 00:59:24.400043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:59:24.529628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.874402ms"
	I0815 00:59:24.529860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.191µs"
	I0815 00:59:32.776477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:59:39.065308       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:39.085462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:39.308899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:59:39.309634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.588253       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:59:40.588359       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-978269-m03\" does not exist"
	I0815 00:59:40.600753       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-978269-m03" podCIDRs=["10.244.2.0/24"]
	I0815 00:59:40.602220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.602278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.609826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.798492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:41.141587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:44.469677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:50.839618       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:59.689879       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:59:59.690450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:59.702126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	
	
	==> kube-proxy [8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:58:22.688563       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:58:22.700275       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E0815 00:58:22.700352       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:58:22.748852       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:58:22.748921       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:58:22.748950       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:58:22.750977       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:58:22.751277       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:58:22.751299       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:58:22.753068       1 config.go:197] "Starting service config controller"
	I0815 00:58:22.753106       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:58:22.753126       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:58:22.753130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:58:22.753602       1 config.go:326] "Starting node config controller"
	I0815 00:58:22.753628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:58:22.853439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:58:22.853479       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:58:22.853704       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:51:47.717868       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:51:47.728685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E0815 00:51:47.728828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:51:47.755665       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:51:47.755693       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:51:47.755720       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:51:47.758896       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:51:47.759249       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:51:47.759396       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:51:47.760690       1 config.go:197] "Starting service config controller"
	I0815 00:51:47.760855       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:51:47.760903       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:51:47.760920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:51:47.762563       1 config.go:326] "Starting node config controller"
	I0815 00:51:47.762653       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:51:47.861753       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:51:47.861867       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:51:47.863148       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff] <==
	E0815 00:51:39.230329       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:51:40.037252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.037420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.078506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:51:40.078609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.092526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:51:40.092604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.102852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.102932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.137808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:51:40.137909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.157530       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:51:40.157623       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:51:40.160708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:51:40.160749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.236272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:51:40.236316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.441243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:51:40.441345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.546882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:51:40.547031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.561875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.561960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 00:51:43.420473       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 00:56:42.031882       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8] <==
	I0815 00:58:18.279718       1 serving.go:386] Generated self-signed cert in-memory
	W0815 00:58:20.928397       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 00:58:20.928436       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 00:58:20.928446       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 00:58:20.928460       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 00:58:20.972116       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 00:58:20.977233       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:58:20.986835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 00:58:20.987003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 00:58:20.987053       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 00:58:20.987081       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 00:58:21.087478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 00:58:26 multinode-978269 kubelet[3106]: E0815 00:58:26.860145    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683506858904655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:28 multinode-978269 kubelet[3106]: I0815 00:58:28.729344    3106 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 15 00:58:36 multinode-978269 kubelet[3106]: E0815 00:58:36.862734    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683516862435270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:36 multinode-978269 kubelet[3106]: E0815 00:58:36.862768    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683516862435270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:46 multinode-978269 kubelet[3106]: E0815 00:58:46.867129    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683526866879572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:46 multinode-978269 kubelet[3106]: E0815 00:58:46.867221    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683526866879572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:56 multinode-978269 kubelet[3106]: E0815 00:58:56.870082    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683536869698657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:58:56 multinode-978269 kubelet[3106]: E0815 00:58:56.870139    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683536869698657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:06 multinode-978269 kubelet[3106]: E0815 00:59:06.872105    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683546871762262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:06 multinode-978269 kubelet[3106]: E0815 00:59:06.872130    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683546871762262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:16 multinode-978269 kubelet[3106]: E0815 00:59:16.860971    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 00:59:16 multinode-978269 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 00:59:16 multinode-978269 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 00:59:16 multinode-978269 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 00:59:16 multinode-978269 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 00:59:16 multinode-978269 kubelet[3106]: E0815 00:59:16.876893    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683556875399443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:16 multinode-978269 kubelet[3106]: E0815 00:59:16.876947    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683556875399443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:26 multinode-978269 kubelet[3106]: E0815 00:59:26.878947    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683566878575782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:26 multinode-978269 kubelet[3106]: E0815 00:59:26.879480    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683566878575782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:36 multinode-978269 kubelet[3106]: E0815 00:59:36.881939    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683576881467808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:36 multinode-978269 kubelet[3106]: E0815 00:59:36.882539    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683576881467808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:46 multinode-978269 kubelet[3106]: E0815 00:59:46.886109    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683586885140333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:46 multinode-978269 kubelet[3106]: E0815 00:59:46.886212    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683586885140333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:56 multinode-978269 kubelet[3106]: E0815 00:59:56.888204    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683596887608258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 00:59:56 multinode-978269 kubelet[3106]: E0815 00:59:56.888304    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683596887608258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:00:02.068886   50588 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
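The `bufio.Scanner: token too long` error in the stderr block above is Go's standard-library behavior: `bufio.Scanner` refuses tokens larger than its buffer (64 KiB by default, `bufio.MaxScanTokenSize`), so a very long line in lastStart.txt aborts the scan with `bufio.ErrTooLong`. A minimal sketch of reading such a file with an enlarged buffer follows; the file path is hypothetical and this is not minikube's actual implementation, only an illustration of the failure mode and the usual workaround.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path standing in for a log file that contains very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// With the default buffer, a line longer than 64 KiB makes Scan() stop and
		// s.Err() report bufio.ErrTooLong ("token too long"). Raising the maximum
		// token size lets the scanner handle such lines.
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for s.Scan() {
			fmt.Println(s.Text())
		}
		if err := s.Err(); err != nil {
			log.Fatal(err)
		}
	}
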
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-978269 -n multinode-978269
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-978269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.51s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-978269 stop: exit status 82 (2m0.459275905s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-978269-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-978269 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-978269 status: exit status 3 (18.844503097s)

                                                
                                                
-- stdout --
	multinode-978269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-978269-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:02:25.216918   51247 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0815 01:02:25.216951   51247 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-978269 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-978269 -n multinode-978269
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-978269 logs -n 25: (1.361422424s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269:/home/docker/cp-test_multinode-978269-m02_multinode-978269.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269 sudo cat                                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m02_multinode-978269.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03:/home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269-m03 sudo cat                                   | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp testdata/cp-test.txt                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269:/home/docker/cp-test_multinode-978269-m03_multinode-978269.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269 sudo cat                                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m03_multinode-978269.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt                       | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m02:/home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n                                                                 | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | multinode-978269-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-978269 ssh -n multinode-978269-m02 sudo cat                                   | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	|         | /home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-978269 node stop m03                                                          | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:53 UTC | 15 Aug 24 00:53 UTC |
	| node    | multinode-978269 node start                                                             | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC | 15 Aug 24 00:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-978269                                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC |                     |
	| stop    | -p multinode-978269                                                                     | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:54 UTC |                     |
	| start   | -p multinode-978269                                                                     | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 00:56 UTC | 15 Aug 24 01:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-978269                                                                | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 01:00 UTC |                     |
	| node    | multinode-978269 node delete                                                            | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 01:00 UTC | 15 Aug 24 01:00 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-978269 stop                                                                   | multinode-978269 | jenkins | v1.33.1 | 15 Aug 24 01:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:56:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:56:41.101611   49465 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:56:41.101727   49465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:41.101734   49465 out.go:304] Setting ErrFile to fd 2...
	I0815 00:56:41.101741   49465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:56:41.101911   49465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:56:41.102440   49465 out.go:298] Setting JSON to false
	I0815 00:56:41.103373   49465 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5946,"bootTime":1723677455,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:56:41.103427   49465 start.go:139] virtualization: kvm guest
	I0815 00:56:41.105597   49465 out.go:177] * [multinode-978269] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:56:41.106932   49465 notify.go:220] Checking for updates...
	I0815 00:56:41.106962   49465 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:56:41.108281   49465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:56:41.109617   49465 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:56:41.110844   49465 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:56:41.111997   49465 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:56:41.113349   49465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:56:41.114753   49465 config.go:182] Loaded profile config "multinode-978269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:56:41.114849   49465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:56:41.115300   49465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:56:41.115372   49465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:56:41.131065   49465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0815 00:56:41.131469   49465 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:56:41.132040   49465 main.go:141] libmachine: Using API Version  1
	I0815 00:56:41.132069   49465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:56:41.132503   49465 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:56:41.132705   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.168608   49465 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 00:56:41.170000   49465 start.go:297] selected driver: kvm2
	I0815 00:56:41.170024   49465 start.go:901] validating driver "kvm2" against &{Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:56:41.170163   49465 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:56:41.170565   49465 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:56:41.170670   49465 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:56:41.185662   49465 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:56:41.186346   49465 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 00:56:41.186415   49465 cni.go:84] Creating CNI manager for ""
	I0815 00:56:41.186427   49465 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 00:56:41.186496   49465 start.go:340] cluster config:
	{Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:56:41.186636   49465 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:56:41.188412   49465 out.go:177] * Starting "multinode-978269" primary control-plane node in "multinode-978269" cluster
	I0815 00:56:41.189743   49465 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:56:41.189788   49465 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:56:41.189799   49465 cache.go:56] Caching tarball of preloaded images
	I0815 00:56:41.189882   49465 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 00:56:41.189894   49465 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 00:56:41.190041   49465 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/config.json ...
	I0815 00:56:41.190275   49465 start.go:360] acquireMachinesLock for multinode-978269: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 00:56:41.190352   49465 start.go:364] duration metric: took 32.543µs to acquireMachinesLock for "multinode-978269"
	I0815 00:56:41.190369   49465 start.go:96] Skipping create...Using existing machine configuration
	I0815 00:56:41.190380   49465 fix.go:54] fixHost starting: 
	I0815 00:56:41.190650   49465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:56:41.190687   49465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:56:41.205366   49465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
	I0815 00:56:41.205835   49465 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:56:41.206291   49465 main.go:141] libmachine: Using API Version  1
	I0815 00:56:41.206334   49465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:56:41.206669   49465 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:56:41.206861   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.206999   49465 main.go:141] libmachine: (multinode-978269) Calling .GetState
	I0815 00:56:41.208931   49465 fix.go:112] recreateIfNeeded on multinode-978269: state=Running err=<nil>
	W0815 00:56:41.208968   49465 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 00:56:41.212032   49465 out.go:177] * Updating the running kvm2 "multinode-978269" VM ...
	I0815 00:56:41.213321   49465 machine.go:94] provisionDockerMachine start ...
	I0815 00:56:41.213372   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:56:41.213579   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.216261   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.216816   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.216846   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.217046   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.217227   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.217380   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.217496   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.217643   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.217860   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.217873   49465 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 00:56:41.325965   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-978269
	
	I0815 00:56:41.326004   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.326311   49465 buildroot.go:166] provisioning hostname "multinode-978269"
	I0815 00:56:41.326352   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.326529   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.329619   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.329962   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.329986   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.330176   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.330341   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.330538   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.330745   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.330947   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.331134   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.331149   49465 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-978269 && echo "multinode-978269" | sudo tee /etc/hostname
	I0815 00:56:41.446894   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-978269
	
	I0815 00:56:41.446921   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.449797   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.450235   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.450264   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.450475   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.450664   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.450796   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.451025   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.451178   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.451357   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.451373   49465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-978269' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-978269/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-978269' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 00:56:41.557435   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 00:56:41.557463   49465 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 00:56:41.557493   49465 buildroot.go:174] setting up certificates
	I0815 00:56:41.557502   49465 provision.go:84] configureAuth start
	I0815 00:56:41.557513   49465 main.go:141] libmachine: (multinode-978269) Calling .GetMachineName
	I0815 00:56:41.557800   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:56:41.560511   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.560914   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.560949   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.561086   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.563056   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.563381   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.563420   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.563529   49465 provision.go:143] copyHostCerts
	I0815 00:56:41.563582   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:56:41.563614   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 00:56:41.563632   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 00:56:41.563707   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 00:56:41.563816   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:56:41.563839   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 00:56:41.563844   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 00:56:41.563871   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 00:56:41.563933   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:56:41.563953   49465 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 00:56:41.563958   49465 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 00:56:41.563981   49465 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 00:56:41.564046   49465 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.multinode-978269 san=[127.0.0.1 192.168.39.9 localhost minikube multinode-978269]
	I0815 00:56:41.742696   49465 provision.go:177] copyRemoteCerts
	I0815 00:56:41.742761   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 00:56:41.742782   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.746032   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.746464   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.746491   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.746713   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.746909   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.747106   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.747246   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:56:41.830850   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 00:56:41.830920   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 00:56:41.860573   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 00:56:41.860676   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 00:56:41.885044   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 00:56:41.885113   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 00:56:41.907449   49465 provision.go:87] duration metric: took 349.933653ms to configureAuth
	I0815 00:56:41.907477   49465 buildroot.go:189] setting minikube options for container-runtime
	I0815 00:56:41.907722   49465 config.go:182] Loaded profile config "multinode-978269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:56:41.907798   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:56:41.910554   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.910948   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:56:41.910976   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:56:41.911093   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:56:41.911277   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.911431   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:56:41.911600   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:56:41.911760   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:56:41.911935   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:56:41.911948   49465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 00:58:12.642699   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 00:58:12.642747   49465 machine.go:97] duration metric: took 1m31.429410222s to provisionDockerMachine
	I0815 00:58:12.642765   49465 start.go:293] postStartSetup for "multinode-978269" (driver="kvm2")
	I0815 00:58:12.642788   49465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 00:58:12.642812   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.643184   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 00:58:12.643209   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.646807   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.647400   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.647442   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.647516   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.647715   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.647855   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.647993   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.731510   49465 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 00:58:12.735601   49465 command_runner.go:130] > NAME=Buildroot
	I0815 00:58:12.735624   49465 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 00:58:12.735629   49465 command_runner.go:130] > ID=buildroot
	I0815 00:58:12.735634   49465 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 00:58:12.735641   49465 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 00:58:12.735678   49465 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 00:58:12.735697   49465 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 00:58:12.735776   49465 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 00:58:12.735874   49465 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 00:58:12.735888   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /etc/ssl/certs/202792.pem
	I0815 00:58:12.735987   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 00:58:12.745028   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:58:12.767190   49465 start.go:296] duration metric: took 124.411832ms for postStartSetup
	I0815 00:58:12.767265   49465 fix.go:56] duration metric: took 1m31.576887329s for fixHost
	I0815 00:58:12.767292   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.769869   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.770254   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.770284   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.770452   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.770653   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.770816   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.770957   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.771122   49465 main.go:141] libmachine: Using SSH client type: native
	I0815 00:58:12.771301   49465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0815 00:58:12.771312   49465 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 00:58:12.873111   49465 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723683492.846810194
	
	I0815 00:58:12.873135   49465 fix.go:216] guest clock: 1723683492.846810194
	I0815 00:58:12.873158   49465 fix.go:229] Guest: 2024-08-15 00:58:12.846810194 +0000 UTC Remote: 2024-08-15 00:58:12.767274555 +0000 UTC m=+91.700030506 (delta=79.535639ms)
	I0815 00:58:12.873198   49465 fix.go:200] guest clock delta is within tolerance: 79.535639ms
	I0815 00:58:12.873206   49465 start.go:83] releasing machines lock for "multinode-978269", held for 1m31.682841428s
	I0815 00:58:12.873234   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.873474   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:58:12.876502   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.876844   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.876868   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.877048   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877520   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877725   49465 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:58:12.877819   49465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 00:58:12.877854   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.877969   49465 ssh_runner.go:195] Run: cat /version.json
	I0815 00:58:12.877993   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:58:12.880443   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.880621   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.880859   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.880885   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.881053   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.881074   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:12.881095   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:12.881196   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.881279   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:58:12.881338   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.881566   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.881601   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:58:12.881746   49465 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:58:12.881876   49465 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:58:12.957062   49465 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0815 00:58:12.957382   49465 ssh_runner.go:195] Run: systemctl --version
	I0815 00:58:12.994916   49465 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 00:58:12.994964   49465 command_runner.go:130] > systemd 252 (252)
	I0815 00:58:12.994984   49465 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 00:58:12.995062   49465 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 00:58:13.148312   49465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 00:58:13.156190   49465 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 00:58:13.156254   49465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 00:58:13.156334   49465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 00:58:13.165179   49465 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 00:58:13.165202   49465 start.go:495] detecting cgroup driver to use...
	I0815 00:58:13.165275   49465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 00:58:13.181568   49465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 00:58:13.195299   49465 docker.go:217] disabling cri-docker service (if available) ...
	I0815 00:58:13.195355   49465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 00:58:13.208919   49465 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 00:58:13.222255   49465 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 00:58:13.358438   49465 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 00:58:13.498305   49465 docker.go:233] disabling docker service ...
	I0815 00:58:13.498385   49465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 00:58:13.513976   49465 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 00:58:13.526116   49465 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 00:58:13.662582   49465 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 00:58:13.808791   49465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 00:58:13.822004   49465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 00:58:13.841519   49465 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0815 00:58:13.841564   49465 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 00:58:13.841609   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.851437   49465 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 00:58:13.851509   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.861425   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.871028   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.880784   49465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 00:58:13.912878   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.928498   49465 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.966878   49465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 00:58:13.984521   49465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 00:58:14.000622   49465 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 00:58:14.000747   49465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 00:58:14.011860   49465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:58:14.205579   49465 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 00:58:14.476619   49465 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 00:58:14.476711   49465 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 00:58:14.481349   49465 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 00:58:14.481375   49465 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 00:58:14.481383   49465 command_runner.go:130] > Device: 0,22	Inode: 1417        Links: 1
	I0815 00:58:14.481394   49465 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 00:58:14.481402   49465 command_runner.go:130] > Access: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481412   49465 command_runner.go:130] > Modify: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481422   49465 command_runner.go:130] > Change: 2024-08-15 00:58:14.309430924 +0000
	I0815 00:58:14.481430   49465 command_runner.go:130] >  Birth: -
	I0815 00:58:14.481447   49465 start.go:563] Will wait 60s for crictl version
	I0815 00:58:14.481491   49465 ssh_runner.go:195] Run: which crictl
	I0815 00:58:14.484938   49465 command_runner.go:130] > /usr/bin/crictl
	I0815 00:58:14.484997   49465 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 00:58:14.520674   49465 command_runner.go:130] > Version:  0.1.0
	I0815 00:58:14.520698   49465 command_runner.go:130] > RuntimeName:  cri-o
	I0815 00:58:14.520704   49465 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 00:58:14.520781   49465 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 00:58:14.522010   49465 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 00:58:14.522085   49465 ssh_runner.go:195] Run: crio --version
	I0815 00:58:14.548957   49465 command_runner.go:130] > crio version 1.29.1
	I0815 00:58:14.548987   49465 command_runner.go:130] > Version:        1.29.1
	I0815 00:58:14.548995   49465 command_runner.go:130] > GitCommit:      unknown
	I0815 00:58:14.549002   49465 command_runner.go:130] > GitCommitDate:  unknown
	I0815 00:58:14.549007   49465 command_runner.go:130] > GitTreeState:   clean
	I0815 00:58:14.549013   49465 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 00:58:14.549017   49465 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 00:58:14.549021   49465 command_runner.go:130] > Compiler:       gc
	I0815 00:58:14.549025   49465 command_runner.go:130] > Platform:       linux/amd64
	I0815 00:58:14.549029   49465 command_runner.go:130] > Linkmode:       dynamic
	I0815 00:58:14.549038   49465 command_runner.go:130] > BuildTags:      
	I0815 00:58:14.549050   49465 command_runner.go:130] >   containers_image_ostree_stub
	I0815 00:58:14.549055   49465 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 00:58:14.549059   49465 command_runner.go:130] >   btrfs_noversion
	I0815 00:58:14.549066   49465 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 00:58:14.549075   49465 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 00:58:14.549084   49465 command_runner.go:130] >   seccomp
	I0815 00:58:14.549091   49465 command_runner.go:130] > LDFlags:          unknown
	I0815 00:58:14.549097   49465 command_runner.go:130] > SeccompEnabled:   true
	I0815 00:58:14.549104   49465 command_runner.go:130] > AppArmorEnabled:  false
	I0815 00:58:14.549183   49465 ssh_runner.go:195] Run: crio --version
	I0815 00:58:14.575442   49465 command_runner.go:130] > crio version 1.29.1
	I0815 00:58:14.575464   49465 command_runner.go:130] > Version:        1.29.1
	I0815 00:58:14.575470   49465 command_runner.go:130] > GitCommit:      unknown
	I0815 00:58:14.575474   49465 command_runner.go:130] > GitCommitDate:  unknown
	I0815 00:58:14.575478   49465 command_runner.go:130] > GitTreeState:   clean
	I0815 00:58:14.575484   49465 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 00:58:14.575495   49465 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 00:58:14.575499   49465 command_runner.go:130] > Compiler:       gc
	I0815 00:58:14.575505   49465 command_runner.go:130] > Platform:       linux/amd64
	I0815 00:58:14.575511   49465 command_runner.go:130] > Linkmode:       dynamic
	I0815 00:58:14.575519   49465 command_runner.go:130] > BuildTags:      
	I0815 00:58:14.575530   49465 command_runner.go:130] >   containers_image_ostree_stub
	I0815 00:58:14.575538   49465 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 00:58:14.575544   49465 command_runner.go:130] >   btrfs_noversion
	I0815 00:58:14.575551   49465 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 00:58:14.575561   49465 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 00:58:14.575567   49465 command_runner.go:130] >   seccomp
	I0815 00:58:14.575576   49465 command_runner.go:130] > LDFlags:          unknown
	I0815 00:58:14.575582   49465 command_runner.go:130] > SeccompEnabled:   true
	I0815 00:58:14.575591   49465 command_runner.go:130] > AppArmorEnabled:  false
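The two probes above (crictl version and crio --version) confirm the runtime name and version before the "Preparing Kubernetes" step that follows. As a rough illustration of that check, a minimal standalone sketch (not minikube's actual code; it assumes crictl is on PATH and sudo is available) could parse the same key/value output:

	// Illustrative only: confirm the CRI runtime and version the way the
	// probe above does, by parsing crictl's "Key:  value" lines.
	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "version").Output()
		if err != nil {
			fmt.Println("crictl not available:", err)
			return
		}
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		// For the run logged above this prints: runtime cri-o, version 1.29.1
		fmt.Printf("runtime %s, version %s\n", fields["RuntimeName"], fields["RuntimeVersion"])
	}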
	I0815 00:58:14.577666   49465 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 00:58:14.578896   49465 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:58:14.581441   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:14.581745   49465 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:58:14.581767   49465 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:58:14.581961   49465 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 00:58:14.585872   49465 command_runner.go:130] > 192.168.39.1	host.minikube.internal
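The grep above verifies that /etc/hosts on the guest already maps the libvirt gateway (192.168.39.1) to host.minikube.internal, so pods and the node can reach the host by name. A minimal sketch of that check (illustrative only, not the real ssh_runner call) might be:

	// Illustrative only: check whether the host.minikube.internal alias is
	// already present in /etc/hosts, as the grep above does.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			// A matching entry looks like: "192.168.39.1	host.minikube.internal"
			if len(fields) >= 2 && fields[0] == "192.168.39.1" && fields[1] == "host.minikube.internal" {
				fmt.Println("host.minikube.internal entry present")
				return
			}
		}
		fmt.Println("entry missing; minikube would append it to /etc/hosts")
	}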
	I0815 00:58:14.585971   49465 kubeadm.go:883] updating cluster {Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
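The single "updating cluster" line above is a dump of the in-memory cluster configuration. For orientation, an illustrative subset of those fields expressed as Go types (field names copied from the logged line; this is not the complete minikube config struct) looks roughly like:

	package config

	// Illustrative subset of the logged cluster config; not the full minikube type.
	type Node struct {
		Name              string // "" for the primary node, "m02"/"m03" for the others
		IP                string // e.g. 192.168.39.9
		Port              int    // 8443 on control-plane nodes, 0 on pure workers
		KubernetesVersion string // v1.31.0
		ControlPlane      bool
		Worker            bool
	}

	type KubernetesConfig struct {
		KubernetesVersion string // v1.31.0
		ClusterName       string // multinode-978269
		ContainerRuntime  string // crio
		NetworkPlugin     string // cni
		ServiceCIDR       string // 10.96.0.0/12
	}

	type ClusterConfig struct {
		Name             string // multinode-978269
		Driver           string // kvm2
		Memory           int    // 2200 (MB)
		CPUs             int    // 2
		KubernetesConfig KubernetesConfig
		Nodes            []Node
		Addons           map[string]bool
	}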
	I0815 00:58:14.586117   49465 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:58:14.586177   49465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:58:14.627352   49465 command_runner.go:130] > {
	I0815 00:58:14.627369   49465 command_runner.go:130] >   "images": [
	I0815 00:58:14.627373   49465 command_runner.go:130] >     {
	I0815 00:58:14.627380   49465 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 00:58:14.627385   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627391   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 00:58:14.627395   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627400   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627412   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 00:58:14.627425   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 00:58:14.627431   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627447   49465 command_runner.go:130] >       "size": "87165492",
	I0815 00:58:14.627454   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627458   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627463   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627467   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627473   49465 command_runner.go:130] >     },
	I0815 00:58:14.627477   49465 command_runner.go:130] >     {
	I0815 00:58:14.627483   49465 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 00:58:14.627492   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627506   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 00:58:14.627515   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627522   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627536   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 00:58:14.627551   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 00:58:14.627557   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627561   49465 command_runner.go:130] >       "size": "87190579",
	I0815 00:58:14.627567   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627578   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627587   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627597   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627605   49465 command_runner.go:130] >     },
	I0815 00:58:14.627611   49465 command_runner.go:130] >     {
	I0815 00:58:14.627623   49465 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 00:58:14.627633   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627643   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 00:58:14.627651   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627655   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627669   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 00:58:14.627684   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 00:58:14.627692   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627700   49465 command_runner.go:130] >       "size": "1363676",
	I0815 00:58:14.627708   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627715   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627725   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627733   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627739   49465 command_runner.go:130] >     },
	I0815 00:58:14.627749   49465 command_runner.go:130] >     {
	I0815 00:58:14.627762   49465 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 00:58:14.627772   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627782   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 00:58:14.627788   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627797   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627811   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 00:58:14.627830   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 00:58:14.627838   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627848   49465 command_runner.go:130] >       "size": "31470524",
	I0815 00:58:14.627858   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.627867   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.627873   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.627882   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.627890   49465 command_runner.go:130] >     },
	I0815 00:58:14.627898   49465 command_runner.go:130] >     {
	I0815 00:58:14.627907   49465 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 00:58:14.627913   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.627921   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 00:58:14.627929   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627939   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.627953   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 00:58:14.627967   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 00:58:14.627976   49465 command_runner.go:130] >       ],
	I0815 00:58:14.627985   49465 command_runner.go:130] >       "size": "61245718",
	I0815 00:58:14.627993   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.628000   49465 command_runner.go:130] >       "username": "nonroot",
	I0815 00:58:14.628006   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628015   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628024   49465 command_runner.go:130] >     },
	I0815 00:58:14.628032   49465 command_runner.go:130] >     {
	I0815 00:58:14.628042   49465 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 00:58:14.628051   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628060   49465 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 00:58:14.628069   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628077   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628093   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 00:58:14.628103   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 00:58:14.628109   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628115   49465 command_runner.go:130] >       "size": "149009664",
	I0815 00:58:14.628121   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628128   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628132   49465 command_runner.go:130] >       },
	I0815 00:58:14.628138   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628145   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628152   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628158   49465 command_runner.go:130] >     },
	I0815 00:58:14.628165   49465 command_runner.go:130] >     {
	I0815 00:58:14.628171   49465 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 00:58:14.628179   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628190   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 00:58:14.628199   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628209   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628223   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 00:58:14.628237   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 00:58:14.628245   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628253   49465 command_runner.go:130] >       "size": "95233506",
	I0815 00:58:14.628256   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628268   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628276   49465 command_runner.go:130] >       },
	I0815 00:58:14.628286   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628295   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628304   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628312   49465 command_runner.go:130] >     },
	I0815 00:58:14.628318   49465 command_runner.go:130] >     {
	I0815 00:58:14.628331   49465 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 00:58:14.628339   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628344   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 00:58:14.628352   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628362   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628393   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 00:58:14.628409   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 00:58:14.628419   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628427   49465 command_runner.go:130] >       "size": "89437512",
	I0815 00:58:14.628431   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628440   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628448   49465 command_runner.go:130] >       },
	I0815 00:58:14.628457   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628466   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628473   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628478   49465 command_runner.go:130] >     },
	I0815 00:58:14.628484   49465 command_runner.go:130] >     {
	I0815 00:58:14.628493   49465 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 00:58:14.628499   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628507   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 00:58:14.628511   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628514   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628524   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 00:58:14.628536   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 00:58:14.628542   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628548   49465 command_runner.go:130] >       "size": "92728217",
	I0815 00:58:14.628555   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.628564   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.628570   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.628578   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.628585   49465 command_runner.go:130] >     },
	I0815 00:58:14.628591   49465 command_runner.go:130] >     {
	I0815 00:58:14.628600   49465 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 00:58:14.628608   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.628619   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 00:58:14.628627   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628634   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.628646   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 00:58:14.628676   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 00:58:14.628685   49465 command_runner.go:130] >       ],
	I0815 00:58:14.628691   49465 command_runner.go:130] >       "size": "68420936",
	I0815 00:58:14.628700   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.628710   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.628724   49465 command_runner.go:130] >       },
	I0815 00:58:14.628981   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.629023   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.629030   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.629036   49465 command_runner.go:130] >     },
	I0815 00:58:14.629043   49465 command_runner.go:130] >     {
	I0815 00:58:14.629063   49465 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 00:58:14.629070   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.629077   49465 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 00:58:14.629104   49465 command_runner.go:130] >       ],
	I0815 00:58:14.629110   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.629134   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 00:58:14.629145   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 00:58:14.629151   49465 command_runner.go:130] >       ],
	I0815 00:58:14.629158   49465 command_runner.go:130] >       "size": "742080",
	I0815 00:58:14.629169   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.629176   49465 command_runner.go:130] >         "value": "65535"
	I0815 00:58:14.629181   49465 command_runner.go:130] >       },
	I0815 00:58:14.629187   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.629193   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.629199   49465 command_runner.go:130] >       "pinned": true
	I0815 00:58:14.629209   49465 command_runner.go:130] >     }
	I0815 00:58:14.629214   49465 command_runner.go:130] >   ]
	I0815 00:58:14.629219   49465 command_runner.go:130] > }
	I0815 00:58:14.629499   49465 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:58:14.629511   49465 crio.go:433] Images already preloaded, skipping extraction
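Both crictl listings in this section return the same JSON shape: an "images" array whose entries carry id, repoTags, repoDigests, size and pinned. A self-contained sketch of how that payload can be decoded and compared against the images needed for v1.31.0 (illustrative only; the required list is copied from the output above, and this is not minikube's preload logic):

	// Illustrative only: decode `sudo crictl images --output json` and check
	// whether the images required for this Kubernetes version are present.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImages struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Required set for v1.31.0 on CRI-O, taken from the listing above.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/kube-controller-manager:v1.31.0",
			"registry.k8s.io/kube-scheduler:v1.31.0",
			"registry.k8s.io/kube-proxy:v1.31.0",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.10",
		}
		for _, r := range required {
			if !have[r] {
				fmt.Println("missing:", r)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}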
	I0815 00:58:14.629577   49465 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 00:58:14.663119   49465 command_runner.go:130] > {
	I0815 00:58:14.663141   49465 command_runner.go:130] >   "images": [
	I0815 00:58:14.663145   49465 command_runner.go:130] >     {
	I0815 00:58:14.663154   49465 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 00:58:14.663160   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663167   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 00:58:14.663171   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663174   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663183   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 00:58:14.663204   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 00:58:14.663213   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663219   49465 command_runner.go:130] >       "size": "87165492",
	I0815 00:58:14.663229   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663235   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663246   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663255   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663260   49465 command_runner.go:130] >     },
	I0815 00:58:14.663267   49465 command_runner.go:130] >     {
	I0815 00:58:14.663277   49465 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 00:58:14.663285   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663294   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 00:58:14.663299   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663305   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663316   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 00:58:14.663338   49465 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 00:58:14.663345   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663349   49465 command_runner.go:130] >       "size": "87190579",
	I0815 00:58:14.663353   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663363   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663367   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663372   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663380   49465 command_runner.go:130] >     },
	I0815 00:58:14.663386   49465 command_runner.go:130] >     {
	I0815 00:58:14.663397   49465 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 00:58:14.663409   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663418   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 00:58:14.663422   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663426   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663434   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 00:58:14.663443   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 00:58:14.663448   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663452   49465 command_runner.go:130] >       "size": "1363676",
	I0815 00:58:14.663458   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663463   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663474   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663480   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663484   49465 command_runner.go:130] >     },
	I0815 00:58:14.663489   49465 command_runner.go:130] >     {
	I0815 00:58:14.663495   49465 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 00:58:14.663501   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663506   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 00:58:14.663512   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663516   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663525   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 00:58:14.663538   49465 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 00:58:14.663544   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663565   49465 command_runner.go:130] >       "size": "31470524",
	I0815 00:58:14.663574   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663579   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663585   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663589   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663593   49465 command_runner.go:130] >     },
	I0815 00:58:14.663597   49465 command_runner.go:130] >     {
	I0815 00:58:14.663605   49465 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 00:58:14.663610   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663617   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 00:58:14.663621   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663626   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663634   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 00:58:14.663642   49465 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 00:58:14.663646   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663652   49465 command_runner.go:130] >       "size": "61245718",
	I0815 00:58:14.663656   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.663663   49465 command_runner.go:130] >       "username": "nonroot",
	I0815 00:58:14.663667   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663673   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663677   49465 command_runner.go:130] >     },
	I0815 00:58:14.663682   49465 command_runner.go:130] >     {
	I0815 00:58:14.663689   49465 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 00:58:14.663695   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663700   49465 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 00:58:14.663706   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663710   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663723   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 00:58:14.663732   49465 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 00:58:14.663735   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663739   49465 command_runner.go:130] >       "size": "149009664",
	I0815 00:58:14.663745   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663749   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663757   49465 command_runner.go:130] >       },
	I0815 00:58:14.663764   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663768   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663774   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663778   49465 command_runner.go:130] >     },
	I0815 00:58:14.663784   49465 command_runner.go:130] >     {
	I0815 00:58:14.663790   49465 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 00:58:14.663796   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663800   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 00:58:14.663806   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663810   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663819   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 00:58:14.663826   49465 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 00:58:14.663832   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663836   49465 command_runner.go:130] >       "size": "95233506",
	I0815 00:58:14.663842   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663846   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663852   49465 command_runner.go:130] >       },
	I0815 00:58:14.663855   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663860   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663864   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663869   49465 command_runner.go:130] >     },
	I0815 00:58:14.663872   49465 command_runner.go:130] >     {
	I0815 00:58:14.663882   49465 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 00:58:14.663887   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663893   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 00:58:14.663898   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663902   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.663918   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 00:58:14.663928   49465 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 00:58:14.663933   49465 command_runner.go:130] >       ],
	I0815 00:58:14.663937   49465 command_runner.go:130] >       "size": "89437512",
	I0815 00:58:14.663943   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.663947   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.663953   49465 command_runner.go:130] >       },
	I0815 00:58:14.663957   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.663963   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.663968   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.663973   49465 command_runner.go:130] >     },
	I0815 00:58:14.663976   49465 command_runner.go:130] >     {
	I0815 00:58:14.663984   49465 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 00:58:14.663988   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.663993   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 00:58:14.663997   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664002   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664010   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 00:58:14.664021   49465 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 00:58:14.664027   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664031   49465 command_runner.go:130] >       "size": "92728217",
	I0815 00:58:14.664036   49465 command_runner.go:130] >       "uid": null,
	I0815 00:58:14.664040   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664046   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664051   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.664056   49465 command_runner.go:130] >     },
	I0815 00:58:14.664059   49465 command_runner.go:130] >     {
	I0815 00:58:14.664067   49465 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 00:58:14.664071   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.664077   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 00:58:14.664083   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664087   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664094   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 00:58:14.664102   49465 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 00:58:14.664105   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664109   49465 command_runner.go:130] >       "size": "68420936",
	I0815 00:58:14.664115   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.664119   49465 command_runner.go:130] >         "value": "0"
	I0815 00:58:14.664122   49465 command_runner.go:130] >       },
	I0815 00:58:14.664126   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664130   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664134   49465 command_runner.go:130] >       "pinned": false
	I0815 00:58:14.664137   49465 command_runner.go:130] >     },
	I0815 00:58:14.664140   49465 command_runner.go:130] >     {
	I0815 00:58:14.664146   49465 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 00:58:14.664152   49465 command_runner.go:130] >       "repoTags": [
	I0815 00:58:14.664156   49465 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 00:58:14.664160   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664164   49465 command_runner.go:130] >       "repoDigests": [
	I0815 00:58:14.664171   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 00:58:14.664179   49465 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 00:58:14.664183   49465 command_runner.go:130] >       ],
	I0815 00:58:14.664188   49465 command_runner.go:130] >       "size": "742080",
	I0815 00:58:14.664191   49465 command_runner.go:130] >       "uid": {
	I0815 00:58:14.664197   49465 command_runner.go:130] >         "value": "65535"
	I0815 00:58:14.664200   49465 command_runner.go:130] >       },
	I0815 00:58:14.664204   49465 command_runner.go:130] >       "username": "",
	I0815 00:58:14.664210   49465 command_runner.go:130] >       "spec": null,
	I0815 00:58:14.664214   49465 command_runner.go:130] >       "pinned": true
	I0815 00:58:14.664220   49465 command_runner.go:130] >     }
	I0815 00:58:14.664223   49465 command_runner.go:130] >   ]
	I0815 00:58:14.664228   49465 command_runner.go:130] > }
	I0815 00:58:14.664355   49465 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 00:58:14.664365   49465 cache_images.go:84] Images are preloaded, skipping loading
	I0815 00:58:14.664380   49465 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.31.0 crio true true} ...
	I0815 00:58:14.664529   49465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-978269 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
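The kubelet drop-in logged above pins --hostname-override and --node-ip to the values of the node being configured. A small hypothetical helper (names and structure are illustrative, not minikube's implementation) showing how that ExecStart line is assembled from those values:

	// Illustrative only: build the kubelet ExecStart line shown in the log
	// from the node's hostname and IP.
	package main

	import (
		"fmt"
		"strings"
	)

	func kubeletExecStart(version, hostname, nodeIP string) string {
		flags := []string{
			"/var/lib/minikube/binaries/" + version + "/kubelet",
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return strings.Join(flags, " ")
	}

	func main() {
		// Values for the primary node in this run, taken from the log above.
		fmt.Println(kubeletExecStart("v1.31.0", "multinode-978269", "192.168.39.9"))
	}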
	I0815 00:58:14.664600   49465 ssh_runner.go:195] Run: crio config
	I0815 00:58:14.696916   49465 command_runner.go:130] ! time="2024-08-15 00:58:14.670648091Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 00:58:14.702709   49465 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0815 00:58:14.708286   49465 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 00:58:14.708313   49465 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 00:58:14.708320   49465 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 00:58:14.708323   49465 command_runner.go:130] > #
	I0815 00:58:14.708331   49465 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 00:58:14.708337   49465 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 00:58:14.708343   49465 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 00:58:14.708351   49465 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 00:58:14.708356   49465 command_runner.go:130] > # reload'.
	I0815 00:58:14.708364   49465 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 00:58:14.708373   49465 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 00:58:14.708383   49465 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 00:58:14.708392   49465 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 00:58:14.708399   49465 command_runner.go:130] > [crio]
	I0815 00:58:14.708406   49465 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 00:58:14.708412   49465 command_runner.go:130] > # containers images, in this directory.
	I0815 00:58:14.708417   49465 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 00:58:14.708428   49465 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 00:58:14.708436   49465 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 00:58:14.708443   49465 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 00:58:14.708448   49465 command_runner.go:130] > # imagestore = ""
	I0815 00:58:14.708454   49465 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 00:58:14.708463   49465 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 00:58:14.708471   49465 command_runner.go:130] > storage_driver = "overlay"
	I0815 00:58:14.708480   49465 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 00:58:14.708490   49465 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 00:58:14.708499   49465 command_runner.go:130] > storage_option = [
	I0815 00:58:14.708508   49465 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 00:58:14.708511   49465 command_runner.go:130] > ]
	I0815 00:58:14.708517   49465 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 00:58:14.708525   49465 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 00:58:14.708529   49465 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 00:58:14.708535   49465 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 00:58:14.708547   49465 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 00:58:14.708558   49465 command_runner.go:130] > # always happen on a node reboot
	I0815 00:58:14.708568   49465 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 00:58:14.708588   49465 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 00:58:14.708601   49465 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 00:58:14.708608   49465 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 00:58:14.708616   49465 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 00:58:14.708623   49465 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 00:58:14.708632   49465 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 00:58:14.708638   49465 command_runner.go:130] > # internal_wipe = true
	I0815 00:58:14.708646   49465 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 00:58:14.708667   49465 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 00:58:14.708677   49465 command_runner.go:130] > # internal_repair = false
	I0815 00:58:14.708686   49465 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 00:58:14.708708   49465 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 00:58:14.708719   49465 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 00:58:14.708730   49465 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 00:58:14.708742   49465 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 00:58:14.708749   49465 command_runner.go:130] > [crio.api]
	I0815 00:58:14.708754   49465 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 00:58:14.708763   49465 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 00:58:14.708774   49465 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 00:58:14.708784   49465 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 00:58:14.708795   49465 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 00:58:14.708806   49465 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 00:58:14.708815   49465 command_runner.go:130] > # stream_port = "0"
	I0815 00:58:14.708827   49465 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 00:58:14.708836   49465 command_runner.go:130] > # stream_enable_tls = false
	I0815 00:58:14.708848   49465 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 00:58:14.708856   49465 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 00:58:14.708868   49465 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 00:58:14.708881   49465 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 00:58:14.708890   49465 command_runner.go:130] > # minutes.
	I0815 00:58:14.708897   49465 command_runner.go:130] > # stream_tls_cert = ""
	I0815 00:58:14.708909   49465 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 00:58:14.708921   49465 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 00:58:14.708931   49465 command_runner.go:130] > # stream_tls_key = ""
	I0815 00:58:14.708941   49465 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 00:58:14.708951   49465 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 00:58:14.708988   49465 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 00:58:14.708999   49465 command_runner.go:130] > # stream_tls_ca = ""
	I0815 00:58:14.709010   49465 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 00:58:14.709025   49465 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 00:58:14.709039   49465 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 00:58:14.709049   49465 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0815 00:58:14.709061   49465 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 00:58:14.709073   49465 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 00:58:14.709080   49465 command_runner.go:130] > [crio.runtime]
	I0815 00:58:14.709087   49465 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 00:58:14.709099   49465 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 00:58:14.709109   49465 command_runner.go:130] > # "nofile=1024:2048"
	I0815 00:58:14.709119   49465 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 00:58:14.709128   49465 command_runner.go:130] > # default_ulimits = [
	I0815 00:58:14.709136   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709145   49465 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 00:58:14.709154   49465 command_runner.go:130] > # no_pivot = false
	I0815 00:58:14.709164   49465 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 00:58:14.709175   49465 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 00:58:14.709183   49465 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 00:58:14.709191   49465 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 00:58:14.709201   49465 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 00:58:14.709215   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 00:58:14.709226   49465 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 00:58:14.709235   49465 command_runner.go:130] > # Cgroup setting for conmon
	I0815 00:58:14.709248   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 00:58:14.709258   49465 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 00:58:14.709270   49465 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 00:58:14.709279   49465 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 00:58:14.709310   49465 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 00:58:14.709321   49465 command_runner.go:130] > conmon_env = [
	I0815 00:58:14.709330   49465 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 00:58:14.709339   49465 command_runner.go:130] > ]
	I0815 00:58:14.709347   49465 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 00:58:14.709357   49465 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 00:58:14.709368   49465 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 00:58:14.709384   49465 command_runner.go:130] > # default_env = [
	I0815 00:58:14.709391   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709397   49465 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 00:58:14.709411   49465 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0815 00:58:14.709421   49465 command_runner.go:130] > # selinux = false
	I0815 00:58:14.709431   49465 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 00:58:14.709443   49465 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 00:58:14.709455   49465 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 00:58:14.709464   49465 command_runner.go:130] > # seccomp_profile = ""
	I0815 00:58:14.709476   49465 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 00:58:14.709487   49465 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 00:58:14.709496   49465 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 00:58:14.709502   49465 command_runner.go:130] > # which might increase security.
	I0815 00:58:14.709513   49465 command_runner.go:130] > # This option is currently deprecated,
	I0815 00:58:14.709525   49465 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 00:58:14.709534   49465 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 00:58:14.709544   49465 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 00:58:14.709557   49465 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 00:58:14.709568   49465 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 00:58:14.709581   49465 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 00:58:14.709590   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.709597   49465 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 00:58:14.709606   49465 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 00:58:14.709616   49465 command_runner.go:130] > # the cgroup blockio controller.
	I0815 00:58:14.709626   49465 command_runner.go:130] > # blockio_config_file = ""
	I0815 00:58:14.709637   49465 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 00:58:14.709646   49465 command_runner.go:130] > # blockio parameters.
	I0815 00:58:14.709656   49465 command_runner.go:130] > # blockio_reload = false
	I0815 00:58:14.709668   49465 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 00:58:14.709676   49465 command_runner.go:130] > # irqbalance daemon.
	I0815 00:58:14.709682   49465 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 00:58:14.709697   49465 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0815 00:58:14.709711   49465 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 00:58:14.709725   49465 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 00:58:14.709737   49465 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 00:58:14.709750   49465 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 00:58:14.709768   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.709776   49465 command_runner.go:130] > # rdt_config_file = ""
	I0815 00:58:14.709781   49465 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 00:58:14.709789   49465 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 00:58:14.709831   49465 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 00:58:14.709842   49465 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 00:58:14.709855   49465 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 00:58:14.709864   49465 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 00:58:14.709871   49465 command_runner.go:130] > # will be added.
	I0815 00:58:14.709878   49465 command_runner.go:130] > # default_capabilities = [
	I0815 00:58:14.709886   49465 command_runner.go:130] > # 	"CHOWN",
	I0815 00:58:14.709893   49465 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 00:58:14.709907   49465 command_runner.go:130] > # 	"FSETID",
	I0815 00:58:14.709916   49465 command_runner.go:130] > # 	"FOWNER",
	I0815 00:58:14.709922   49465 command_runner.go:130] > # 	"SETGID",
	I0815 00:58:14.709929   49465 command_runner.go:130] > # 	"SETUID",
	I0815 00:58:14.709938   49465 command_runner.go:130] > # 	"SETPCAP",
	I0815 00:58:14.709944   49465 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 00:58:14.709952   49465 command_runner.go:130] > # 	"KILL",
	I0815 00:58:14.709957   49465 command_runner.go:130] > # ]
	I0815 00:58:14.709967   49465 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 00:58:14.709984   49465 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 00:58:14.709995   49465 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 00:58:14.710025   49465 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 00:58:14.710041   49465 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 00:58:14.710050   49465 command_runner.go:130] > default_sysctls = [
	I0815 00:58:14.710060   49465 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 00:58:14.710066   49465 command_runner.go:130] > ]
	I0815 00:58:14.710070   49465 command_runner.go:130] > # List of devices on the host that a
	I0815 00:58:14.710082   49465 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 00:58:14.710093   49465 command_runner.go:130] > # allowed_devices = [
	I0815 00:58:14.710099   49465 command_runner.go:130] > # 	"/dev/fuse",
	I0815 00:58:14.710108   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710118   49465 command_runner.go:130] > # List of additional devices, specified as
	I0815 00:58:14.710131   49465 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 00:58:14.710142   49465 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 00:58:14.710163   49465 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 00:58:14.710171   49465 command_runner.go:130] > # additional_devices = [
	I0815 00:58:14.710176   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710187   49465 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 00:58:14.710197   49465 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 00:58:14.710206   49465 command_runner.go:130] > # 	"/etc/cdi",
	I0815 00:58:14.710214   49465 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 00:58:14.710219   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710232   49465 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 00:58:14.710244   49465 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 00:58:14.710251   49465 command_runner.go:130] > # Defaults to false.
	I0815 00:58:14.710256   49465 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 00:58:14.710268   49465 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 00:58:14.710281   49465 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 00:58:14.710287   49465 command_runner.go:130] > # hooks_dir = [
	I0815 00:58:14.710301   49465 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 00:58:14.710310   49465 command_runner.go:130] > # ]
	I0815 00:58:14.710319   49465 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 00:58:14.710332   49465 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 00:58:14.710342   49465 command_runner.go:130] > # its default mounts from the following two files:
	I0815 00:58:14.710348   49465 command_runner.go:130] > #
	I0815 00:58:14.710354   49465 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 00:58:14.710366   49465 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 00:58:14.710378   49465 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 00:58:14.710385   49465 command_runner.go:130] > #
	I0815 00:58:14.710397   49465 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 00:58:14.710410   49465 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 00:58:14.710423   49465 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 00:58:14.710433   49465 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 00:58:14.710441   49465 command_runner.go:130] > #
	I0815 00:58:14.710447   49465 command_runner.go:130] > # default_mounts_file = ""
	I0815 00:58:14.710456   49465 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 00:58:14.710470   49465 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 00:58:14.710479   49465 command_runner.go:130] > pids_limit = 1024
	I0815 00:58:14.710492   49465 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0815 00:58:14.710504   49465 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 00:58:14.710525   49465 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 00:58:14.710537   49465 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 00:58:14.710543   49465 command_runner.go:130] > # log_size_max = -1
	I0815 00:58:14.710556   49465 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 00:58:14.710569   49465 command_runner.go:130] > # log_to_journald = false
	I0815 00:58:14.710581   49465 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 00:58:14.710592   49465 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 00:58:14.710604   49465 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 00:58:14.710615   49465 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 00:58:14.710627   49465 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 00:58:14.710634   49465 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 00:58:14.710641   49465 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 00:58:14.710649   49465 command_runner.go:130] > # read_only = false
	I0815 00:58:14.710662   49465 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 00:58:14.710675   49465 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 00:58:14.710685   49465 command_runner.go:130] > # live configuration reload.
	I0815 00:58:14.710695   49465 command_runner.go:130] > # log_level = "info"
	I0815 00:58:14.710704   49465 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 00:58:14.710714   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.710721   49465 command_runner.go:130] > # log_filter = ""
	I0815 00:58:14.710727   49465 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 00:58:14.710751   49465 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 00:58:14.710762   49465 command_runner.go:130] > # separated by comma.
	I0815 00:58:14.710774   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710783   49465 command_runner.go:130] > # uid_mappings = ""
	I0815 00:58:14.710796   49465 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 00:58:14.710808   49465 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 00:58:14.710817   49465 command_runner.go:130] > # separated by comma.
	I0815 00:58:14.710831   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710837   49465 command_runner.go:130] > # gid_mappings = ""
	I0815 00:58:14.710846   49465 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 00:58:14.710859   49465 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 00:58:14.710872   49465 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 00:58:14.710887   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710897   49465 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 00:58:14.710909   49465 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 00:58:14.710927   49465 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 00:58:14.710938   49465 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 00:58:14.710953   49465 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 00:58:14.710966   49465 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 00:58:14.710978   49465 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 00:58:14.710990   49465 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 00:58:14.711002   49465 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 00:58:14.711010   49465 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 00:58:14.711016   49465 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 00:58:14.711027   49465 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 00:58:14.711038   49465 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 00:58:14.711047   49465 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 00:58:14.711056   49465 command_runner.go:130] > drop_infra_ctr = false
	I0815 00:58:14.711069   49465 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 00:58:14.711081   49465 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 00:58:14.711094   49465 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 00:58:14.711102   49465 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 00:58:14.711110   49465 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 00:58:14.711121   49465 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 00:58:14.711133   49465 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 00:58:14.711142   49465 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 00:58:14.711152   49465 command_runner.go:130] > # shared_cpuset = ""
	I0815 00:58:14.711161   49465 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 00:58:14.711170   49465 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 00:58:14.711178   49465 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 00:58:14.711191   49465 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 00:58:14.711199   49465 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 00:58:14.711205   49465 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 00:58:14.711216   49465 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 00:58:14.711226   49465 command_runner.go:130] > # enable_criu_support = false
	I0815 00:58:14.711238   49465 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 00:58:14.711251   49465 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 00:58:14.711261   49465 command_runner.go:130] > # enable_pod_events = false
	I0815 00:58:14.711273   49465 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 00:58:14.711291   49465 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 00:58:14.711310   49465 command_runner.go:130] > # default_runtime = "runc"
	I0815 00:58:14.711322   49465 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 00:58:14.711334   49465 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0815 00:58:14.711351   49465 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 00:58:14.711367   49465 command_runner.go:130] > # creation as a file is not desired either.
	I0815 00:58:14.711382   49465 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 00:58:14.711391   49465 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 00:58:14.711395   49465 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 00:58:14.711403   49465 command_runner.go:130] > # ]
	I0815 00:58:14.711412   49465 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 00:58:14.711425   49465 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 00:58:14.711438   49465 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 00:58:14.711449   49465 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 00:58:14.711456   49465 command_runner.go:130] > #
	I0815 00:58:14.711464   49465 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 00:58:14.711475   49465 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 00:58:14.711519   49465 command_runner.go:130] > # runtime_type = "oci"
	I0815 00:58:14.711531   49465 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 00:58:14.711538   49465 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 00:58:14.711548   49465 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 00:58:14.711558   49465 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 00:58:14.711567   49465 command_runner.go:130] > # monitor_env = []
	I0815 00:58:14.711576   49465 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 00:58:14.711584   49465 command_runner.go:130] > # allowed_annotations = []
	I0815 00:58:14.711596   49465 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 00:58:14.711604   49465 command_runner.go:130] > # Where:
	I0815 00:58:14.711612   49465 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 00:58:14.711625   49465 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 00:58:14.711638   49465 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 00:58:14.711651   49465 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 00:58:14.711659   49465 command_runner.go:130] > #   in $PATH.
	I0815 00:58:14.711679   49465 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 00:58:14.711691   49465 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 00:58:14.711703   49465 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 00:58:14.711710   49465 command_runner.go:130] > #   state.
	I0815 00:58:14.711720   49465 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 00:58:14.711734   49465 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0815 00:58:14.711748   49465 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 00:58:14.711760   49465 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 00:58:14.711772   49465 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 00:58:14.711785   49465 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 00:58:14.711798   49465 command_runner.go:130] > #   The currently recognized values are:
	I0815 00:58:14.711807   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 00:58:14.711822   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 00:58:14.711834   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 00:58:14.711848   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 00:58:14.711862   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 00:58:14.711874   49465 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 00:58:14.711887   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 00:58:14.711899   49465 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 00:58:14.711907   49465 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 00:58:14.711917   49465 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 00:58:14.711927   49465 command_runner.go:130] > #   deprecated option "conmon".
	I0815 00:58:14.711938   49465 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 00:58:14.711949   49465 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 00:58:14.711963   49465 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 00:58:14.711973   49465 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 00:58:14.711991   49465 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0815 00:58:14.711999   49465 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 00:58:14.712007   49465 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 00:58:14.712017   49465 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 00:58:14.712026   49465 command_runner.go:130] > #
	I0815 00:58:14.712033   49465 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 00:58:14.712042   49465 command_runner.go:130] > #
	I0815 00:58:14.712052   49465 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 00:58:14.712065   49465 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 00:58:14.712073   49465 command_runner.go:130] > #
	I0815 00:58:14.712082   49465 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 00:58:14.712094   49465 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 00:58:14.712100   49465 command_runner.go:130] > #
	I0815 00:58:14.712108   49465 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 00:58:14.712117   49465 command_runner.go:130] > # feature.
	I0815 00:58:14.712126   49465 command_runner.go:130] > #
	I0815 00:58:14.712138   49465 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 00:58:14.712150   49465 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 00:58:14.712162   49465 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 00:58:14.712178   49465 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 00:58:14.712187   49465 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 00:58:14.712194   49465 command_runner.go:130] > #
	I0815 00:58:14.712204   49465 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 00:58:14.712217   49465 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 00:58:14.712226   49465 command_runner.go:130] > #
	I0815 00:58:14.712236   49465 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0815 00:58:14.712247   49465 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 00:58:14.712255   49465 command_runner.go:130] > #
	I0815 00:58:14.712269   49465 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 00:58:14.712281   49465 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 00:58:14.712288   49465 command_runner.go:130] > # limitation.
	I0815 00:58:14.712299   49465 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 00:58:14.712309   49465 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 00:58:14.712316   49465 command_runner.go:130] > runtime_type = "oci"
	I0815 00:58:14.712326   49465 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 00:58:14.712335   49465 command_runner.go:130] > runtime_config_path = ""
	I0815 00:58:14.712344   49465 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 00:58:14.712353   49465 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 00:58:14.712360   49465 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 00:58:14.712369   49465 command_runner.go:130] > monitor_env = [
	I0815 00:58:14.712378   49465 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 00:58:14.712384   49465 command_runner.go:130] > ]
	I0815 00:58:14.712392   49465 command_runner.go:130] > privileged_without_host_devices = false
	I0815 00:58:14.712405   49465 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 00:58:14.712417   49465 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 00:58:14.712430   49465 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 00:58:14.712443   49465 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0815 00:58:14.712459   49465 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 00:58:14.712468   49465 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 00:58:14.712479   49465 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 00:58:14.712494   49465 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 00:58:14.712507   49465 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 00:58:14.712519   49465 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 00:58:14.712525   49465 command_runner.go:130] > # Example:
	I0815 00:58:14.712533   49465 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 00:58:14.712544   49465 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 00:58:14.712552   49465 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 00:58:14.712560   49465 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 00:58:14.712563   49465 command_runner.go:130] > # cpuset = 0
	I0815 00:58:14.712568   49465 command_runner.go:130] > # cpushares = "0-1"
	I0815 00:58:14.712573   49465 command_runner.go:130] > # Where:
	I0815 00:58:14.712580   49465 command_runner.go:130] > # The workload name is workload-type.
	I0815 00:58:14.712591   49465 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 00:58:14.712600   49465 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 00:58:14.712608   49465 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 00:58:14.712620   49465 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 00:58:14.712629   49465 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0815 00:58:14.712637   49465 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 00:58:14.712645   49465 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 00:58:14.712650   49465 command_runner.go:130] > # Default value is set to true
	I0815 00:58:14.712669   49465 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 00:58:14.712679   49465 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 00:58:14.712686   49465 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 00:58:14.712693   49465 command_runner.go:130] > # Default value is set to 'false'
	I0815 00:58:14.712700   49465 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 00:58:14.712710   49465 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 00:58:14.712715   49465 command_runner.go:130] > #
	I0815 00:58:14.712724   49465 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 00:58:14.712735   49465 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 00:58:14.712746   49465 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 00:58:14.712759   49465 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 00:58:14.712775   49465 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 00:58:14.712784   49465 command_runner.go:130] > [crio.image]
	I0815 00:58:14.712794   49465 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 00:58:14.712804   49465 command_runner.go:130] > # default_transport = "docker://"
	I0815 00:58:14.712816   49465 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 00:58:14.712825   49465 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 00:58:14.712836   49465 command_runner.go:130] > # global_auth_file = ""
	I0815 00:58:14.712847   49465 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 00:58:14.712855   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.712867   49465 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 00:58:14.712880   49465 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 00:58:14.712891   49465 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 00:58:14.712902   49465 command_runner.go:130] > # This option supports live configuration reload.
	I0815 00:58:14.712916   49465 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 00:58:14.712925   49465 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 00:58:14.712937   49465 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0815 00:58:14.712950   49465 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0815 00:58:14.712961   49465 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 00:58:14.712971   49465 command_runner.go:130] > # pause_command = "/pause"
	I0815 00:58:14.712984   49465 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 00:58:14.712995   49465 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 00:58:14.713006   49465 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 00:58:14.713019   49465 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 00:58:14.713030   49465 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 00:58:14.713043   49465 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 00:58:14.713053   49465 command_runner.go:130] > # pinned_images = [
	I0815 00:58:14.713061   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713071   49465 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 00:58:14.713083   49465 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 00:58:14.713095   49465 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 00:58:14.713105   49465 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 00:58:14.713115   49465 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 00:58:14.713124   49465 command_runner.go:130] > # signature_policy = ""
	I0815 00:58:14.713137   49465 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 00:58:14.713150   49465 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 00:58:14.713163   49465 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 00:58:14.713174   49465 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0815 00:58:14.713185   49465 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 00:58:14.713192   49465 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 00:58:14.713202   49465 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 00:58:14.713214   49465 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 00:58:14.713225   49465 command_runner.go:130] > # changing them here.
	I0815 00:58:14.713239   49465 command_runner.go:130] > # insecure_registries = [
	I0815 00:58:14.713247   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713258   49465 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 00:58:14.713268   49465 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 00:58:14.713277   49465 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 00:58:14.713285   49465 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 00:58:14.713290   49465 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 00:58:14.713310   49465 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0815 00:58:14.713320   49465 command_runner.go:130] > # CNI plugins.
	I0815 00:58:14.713326   49465 command_runner.go:130] > [crio.network]
	I0815 00:58:14.713339   49465 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 00:58:14.713350   49465 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 00:58:14.713359   49465 command_runner.go:130] > # cni_default_network = ""
	I0815 00:58:14.713371   49465 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 00:58:14.713381   49465 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 00:58:14.713390   49465 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 00:58:14.713395   49465 command_runner.go:130] > # plugin_dirs = [
	I0815 00:58:14.713400   49465 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 00:58:14.713409   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713422   49465 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 00:58:14.713428   49465 command_runner.go:130] > [crio.metrics]
	I0815 00:58:14.713438   49465 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 00:58:14.713447   49465 command_runner.go:130] > enable_metrics = true
	I0815 00:58:14.713457   49465 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 00:58:14.713467   49465 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 00:58:14.713478   49465 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0815 00:58:14.713488   49465 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 00:58:14.713496   49465 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 00:58:14.713503   49465 command_runner.go:130] > # metrics_collectors = [
	I0815 00:58:14.713512   49465 command_runner.go:130] > # 	"operations",
	I0815 00:58:14.713523   49465 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 00:58:14.713533   49465 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 00:58:14.713540   49465 command_runner.go:130] > # 	"operations_errors",
	I0815 00:58:14.713554   49465 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 00:58:14.713563   49465 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 00:58:14.713571   49465 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 00:58:14.713579   49465 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 00:58:14.713583   49465 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 00:58:14.713594   49465 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 00:58:14.713604   49465 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 00:58:14.713612   49465 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 00:58:14.713622   49465 command_runner.go:130] > # 	"containers_oom_total",
	I0815 00:58:14.713631   49465 command_runner.go:130] > # 	"containers_oom",
	I0815 00:58:14.713639   49465 command_runner.go:130] > # 	"processes_defunct",
	I0815 00:58:14.713648   49465 command_runner.go:130] > # 	"operations_total",
	I0815 00:58:14.713658   49465 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 00:58:14.713666   49465 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 00:58:14.713673   49465 command_runner.go:130] > # 	"operations_errors_total",
	I0815 00:58:14.713678   49465 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 00:58:14.713687   49465 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 00:58:14.713697   49465 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 00:58:14.713707   49465 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 00:58:14.713720   49465 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 00:58:14.713729   49465 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 00:58:14.713739   49465 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 00:58:14.713749   49465 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 00:58:14.713756   49465 command_runner.go:130] > # ]
	I0815 00:58:14.713762   49465 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 00:58:14.713767   49465 command_runner.go:130] > # metrics_port = 9090
	I0815 00:58:14.713776   49465 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 00:58:14.713786   49465 command_runner.go:130] > # metrics_socket = ""
	I0815 00:58:14.713794   49465 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 00:58:14.713807   49465 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 00:58:14.713820   49465 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 00:58:14.713831   49465 command_runner.go:130] > # certificate on any modification event.
	I0815 00:58:14.713840   49465 command_runner.go:130] > # metrics_cert = ""
	I0815 00:58:14.713849   49465 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 00:58:14.713857   49465 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 00:58:14.713862   49465 command_runner.go:130] > # metrics_key = ""
	I0815 00:58:14.713873   49465 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 00:58:14.713883   49465 command_runner.go:130] > [crio.tracing]
	I0815 00:58:14.713892   49465 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 00:58:14.713902   49465 command_runner.go:130] > # enable_tracing = false
	I0815 00:58:14.713914   49465 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0815 00:58:14.713924   49465 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 00:58:14.713938   49465 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 00:58:14.713948   49465 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 00:58:14.713955   49465 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 00:58:14.713959   49465 command_runner.go:130] > [crio.nri]
	I0815 00:58:14.713968   49465 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 00:58:14.713975   49465 command_runner.go:130] > # enable_nri = false
	I0815 00:58:14.713985   49465 command_runner.go:130] > # NRI socket to listen on.
	I0815 00:58:14.713996   49465 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 00:58:14.714006   49465 command_runner.go:130] > # NRI plugin directory to use.
	I0815 00:58:14.714016   49465 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 00:58:14.714027   49465 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 00:58:14.714036   49465 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 00:58:14.714044   49465 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 00:58:14.714050   49465 command_runner.go:130] > # nri_disable_connections = false
	I0815 00:58:14.714061   49465 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 00:58:14.714072   49465 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 00:58:14.714083   49465 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 00:58:14.714092   49465 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 00:58:14.714103   49465 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 00:58:14.714111   49465 command_runner.go:130] > [crio.stats]
	I0815 00:58:14.714122   49465 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 00:58:14.714130   49465 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 00:58:14.714136   49465 command_runner.go:130] > # stats_collection_period = 0
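
The dump above is CRI-O echoing its generated TOML configuration: only a handful of keys (cgroup_manager = "cgroupfs", default_sysctls, pids_limit, drop_infra_ctr, pinns_path, pause_image, enable_metrics, plus the [crio.runtime.runtimes.runc] table) are set explicitly, while the commented lines are upstream defaults. A minimal sketch for checking those effective values on the node, assuming the multinode-978269 profile named in this log and CRI-O's usual config handling (drop-ins under /etc/crio/crio.conf.d/); this is not part of the recorded test run:

  # Render the configuration CRI-O is actually using and pick out the keys minikube overrides
  minikube ssh -p multinode-978269 -- sudo crio config | grep -E '^(cgroup_manager|pids_limit|pause_image|drop_infra_ctr)'
  # Config changes (including drop-in overrides) only take effect after a restart
  minikube ssh -p multinode-978269 -- sudo systemctl restart crio
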
	I0815 00:58:14.714257   49465 cni.go:84] Creating CNI manager for ""
	I0815 00:58:14.714267   49465 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 00:58:14.714278   49465 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 00:58:14.714317   49465 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-978269 NodeName:multinode-978269 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 00:58:14.714482   49465 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-978269"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
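
The rendered config above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single multi-document file; the lines that follow show it being copied to /var/tmp/minikube/kubeadm.yaml.new on the node and the kubeadm binary being located under /var/lib/minikube/binaries/v1.31.0. As a hedged follow-up sketch (not something the test itself does), that file can be read back and sanity-checked with the same binary:

  # Read back the config the run below copies to the node (path taken from the scp line that follows)
  minikube ssh -p multinode-978269 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  # kubeadm can validate a multi-document config file without touching the cluster
  minikube ssh -p multinode-978269 -- sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
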
	
	I0815 00:58:14.714561   49465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 00:58:14.724922   49465 command_runner.go:130] > kubeadm
	I0815 00:58:14.724937   49465 command_runner.go:130] > kubectl
	I0815 00:58:14.724943   49465 command_runner.go:130] > kubelet
	I0815 00:58:14.724965   49465 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 00:58:14.725022   49465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 00:58:14.734374   49465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0815 00:58:14.750683   49465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 00:58:14.765486   49465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0815 00:58:14.780055   49465 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I0815 00:58:14.783384   49465 command_runner.go:130] > 192.168.39.9	control-plane.minikube.internal
	I0815 00:58:14.783508   49465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 00:58:14.924473   49465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 00:58:14.939451   49465 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269 for IP: 192.168.39.9
	I0815 00:58:14.939483   49465 certs.go:194] generating shared ca certs ...
	I0815 00:58:14.939507   49465 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 00:58:14.939681   49465 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 00:58:14.939718   49465 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 00:58:14.939727   49465 certs.go:256] generating profile certs ...
	I0815 00:58:14.939857   49465 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/client.key
	I0815 00:58:14.939920   49465 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key.c466d5b3
	I0815 00:58:14.939953   49465 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key
	I0815 00:58:14.939962   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 00:58:14.939974   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 00:58:14.939988   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 00:58:14.939997   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 00:58:14.940009   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 00:58:14.940022   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 00:58:14.940034   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 00:58:14.940044   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 00:58:14.940100   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 00:58:14.940126   49465 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 00:58:14.940135   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 00:58:14.940154   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 00:58:14.940176   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 00:58:14.940197   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 00:58:14.940233   49465 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 00:58:14.940259   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem -> /usr/share/ca-certificates/20279.pem
	I0815 00:58:14.940272   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> /usr/share/ca-certificates/202792.pem
	I0815 00:58:14.940282   49465 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:14.940889   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 00:58:14.964211   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 00:58:14.985865   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 00:58:15.007315   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 00:58:15.029270   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 00:58:15.051111   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 00:58:15.072419   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 00:58:15.093667   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/multinode-978269/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 00:58:15.114558   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 00:58:15.135570   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 00:58:15.157027   49465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 00:58:15.178377   49465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 00:58:15.193831   49465 ssh_runner.go:195] Run: openssl version
	I0815 00:58:15.199414   49465 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 00:58:15.199487   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 00:58:15.209877   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.213935   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.213990   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.214037   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 00:58:15.219265   49465 command_runner.go:130] > 3ec20f2e
	I0815 00:58:15.219354   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 00:58:15.228613   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 00:58:15.238733   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242711   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242745   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.242791   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 00:58:15.248478   49465 command_runner.go:130] > b5213941
	I0815 00:58:15.248534   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 00:58:15.257272   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 00:58:15.267252   49465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271451   49465 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271474   49465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.271515   49465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 00:58:15.276599   49465 command_runner.go:130] > 51391683
	I0815 00:58:15.276765   49465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
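	The three blocks above repeat the same pattern for each CA bundle copied to the node: compute the certificate's OpenSSL subject hash with `openssl x509 -hash -noout`, then (guarded by `test -L ... ||`) symlink the PEM as `<hash>.0` under /etc/ssl/certs so OpenSSL-style trust stores can resolve it. A minimal Go sketch of that step, shelling out to openssl exactly as the log does (hypothetical helper, not minikube's actual implementation):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    // linkCertByHash mirrors the logged commands: ask openssl for the
	    // certificate's subject hash, then symlink the PEM as <hash>.0 in
	    // certsDir unless a link is already present (the `test -L || ln -fs`
	    // guard in the log). Hypothetical helper for illustration only.
	    func linkCertByHash(pemPath, certsDir string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return fmt.Errorf("hashing %s: %w", pemPath, err)
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" or "b5213941" as seen above
	    	link := filepath.Join(certsDir, hash+".0")
	    	if _, err := os.Lstat(link); err == nil {
	    		return nil // link already exists, nothing to do
	    	}
	    	return os.Symlink(pemPath, link)
	    }

	    func main() {
	    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    }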
	I0815 00:58:15.285604   49465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:58:15.289717   49465 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 00:58:15.289734   49465 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 00:58:15.289740   49465 command_runner.go:130] > Device: 253,1	Inode: 3150358     Links: 1
	I0815 00:58:15.289746   49465 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 00:58:15.289754   49465 command_runner.go:130] > Access: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289759   49465 command_runner.go:130] > Modify: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289765   49465 command_runner.go:130] > Change: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289773   49465 command_runner.go:130] >  Birth: 2024-08-15 00:51:32.829729222 +0000
	I0815 00:58:15.289834   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 00:58:15.294826   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.294984   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 00:58:15.299900   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.300072   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 00:58:15.304969   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.305134   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 00:58:15.310062   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.310108   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 00:58:15.314921   49465 command_runner.go:130] > Certificate will not expire
	I0815 00:58:15.315104   49465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 00:58:15.320056   49465 command_runner.go:130] > Certificate will not expire
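	Each `-checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); the `Certificate will not expire` responses mean none of the control-plane certificates are close to expiry. An equivalent pure-Go check (a sketch of the same logic, not the code minikube runs) parses the PEM and compares NotAfter against now + 24h:

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the first certificate in pemPath expires
	    // within the given window. It mirrors `openssl x509 -noout -checkend 86400`,
	    // where 86400 seconds corresponds to window = 24h.
	    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	    	data, err := os.ReadFile(pemPath)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM block in %s", pemPath)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	if expiring {
	    		fmt.Println("Certificate will expire")
	    	} else {
	    		fmt.Println("Certificate will not expire")
	    	}
	    }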
	I0815 00:58:15.320125   49465 kubeadm.go:392] StartCluster: {Name:multinode-978269 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-978269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.147 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:58:15.320256   49465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 00:58:15.320295   49465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 00:58:15.360144   49465 command_runner.go:130] > 8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051
	I0815 00:58:15.360169   49465 command_runner.go:130] > 340a1428c9abff824f0bb4ecd2c9711c6cc39828885cfa0cd4e220850cc17e80
	I0815 00:58:15.360179   49465 command_runner.go:130] > 22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad
	I0815 00:58:15.360190   49465 command_runner.go:130] > 8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e
	I0815 00:58:15.360202   49465 command_runner.go:130] > d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d
	I0815 00:58:15.360213   49465 command_runner.go:130] > a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff
	I0815 00:58:15.360224   49465 command_runner.go:130] > 5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063
	I0815 00:58:15.360239   49465 command_runner.go:130] > 1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9
	I0815 00:58:15.360250   49465 command_runner.go:130] > 60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b
	I0815 00:58:15.360280   49465 cri.go:89] found id: "8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051"
	I0815 00:58:15.360291   49465 cri.go:89] found id: "340a1428c9abff824f0bb4ecd2c9711c6cc39828885cfa0cd4e220850cc17e80"
	I0815 00:58:15.360298   49465 cri.go:89] found id: "22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad"
	I0815 00:58:15.360306   49465 cri.go:89] found id: "8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e"
	I0815 00:58:15.360310   49465 cri.go:89] found id: "d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d"
	I0815 00:58:15.360318   49465 cri.go:89] found id: "a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff"
	I0815 00:58:15.360322   49465 cri.go:89] found id: "5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063"
	I0815 00:58:15.360329   49465 cri.go:89] found id: "1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9"
	I0815 00:58:15.360334   49465 cri.go:89] found id: "60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b"
	I0815 00:58:15.360341   49465 cri.go:89] found id: ""
	I0815 00:58:15.360385   49465 ssh_runner.go:195] Run: sudo runc list -f json
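	The `found id:` lines above are the output of the crictl invocation logged just before them: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line for every kube-system container, whether running or exited. A minimal sketch collecting those IDs the same way (hypothetical helper, shelling out to crictl as the log does; requires crictl on PATH and access to the CRI socket):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // listKubeSystemContainers returns the IDs of all containers (running or
	    // exited) whose pod lives in the kube-system namespace, using the same
	    // crictl invocation shown in the log. Hypothetical helper.
	    func listKubeSystemContainers() ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	    	if err != nil {
	    		return nil, fmt.Errorf("crictl ps: %w", err)
	    	}
	    	var ids []string
	    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	    		if line != "" {
	    			ids = append(ids, line)
	    		}
	    	}
	    	return ids, nil
	    }

	    func main() {
	    	ids, err := listKubeSystemContainers()
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	for _, id := range ids {
	    		fmt.Println("found id:", id)
	    	}
	    }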
	
	
	==> CRI-O <==
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.813506616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683745813481611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd051cc2-175d-4bff-89f3-75d158c8bcd5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.813951944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a50b7c16-c45a-4191-999c-0e24ae93afd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.814020727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a50b7c16-c45a-4191-999c-0e24ae93afd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.814452059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a50b7c16-c45a-4191-999c-0e24ae93afd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.858357341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c87936e-597f-4d48-b314-9419a6f31553 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.858460529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c87936e-597f-4d48-b314-9419a6f31553 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.860236942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c09b5d8-228b-47fc-bf48-34cc4925f37a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.860800945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683745860773986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c09b5d8-228b-47fc-bf48-34cc4925f37a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.861439727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de520f3-4b12-4f90-a7da-5d2550803f24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.861509482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de520f3-4b12-4f90-a7da-5d2550803f24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.861864785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6de520f3-4b12-4f90-a7da-5d2550803f24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.905714813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=501029b4-05f2-4d1c-a4d9-493d7205cde4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.905790569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=501029b4-05f2-4d1c-a4d9-493d7205cde4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.907117910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01b67f5f-d2c8-4442-a046-232b081e7d1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.907624057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683745907595423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01b67f5f-d2c8-4442-a046-232b081e7d1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.908459088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4846f07b-d5bd-417d-ad96-08b4060098b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.908517971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4846f07b-d5bd-417d-ad96-08b4060098b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.910546873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4846f07b-d5bd-417d-ad96-08b4060098b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.953245172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef1eddfd-a11c-40a7-b8f7-fed9be598f41 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.953335922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef1eddfd-a11c-40a7-b8f7-fed9be598f41 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.954789666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cdf94be-c76e-4dfa-b263-a86ee3a6aab2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.955626030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683745955597019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cdf94be-c76e-4dfa-b263-a86ee3a6aab2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.956324265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6a30194-8374-41e9-85ed-72fba926d958 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.956401020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6a30194-8374-41e9-85ed-72fba926d958 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:02:25 multinode-978269 crio[2868]: time="2024-08-15 01:02:25.956762009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:120eb7a5322b4daf2ee1a0cfb9b63388cdfc4e469a5db10b84f10cf47c8d5254,PodSandboxId:16ad6434f062d6d50485494821593edf7dbf293221c7f278c3042dcd0388648b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723683536059232592,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d,PodSandboxId:aca1c8c059dc6fcc588bbf8a022ec41988aa33965b94e573f32106f448f433ed,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723683502548612310,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb,PodSandboxId:84d0e2e7ed71f2d746c72da4542331af3b3d3f6c8a6650a6004d930f3b58eb02,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723683502453590261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953,PodSandboxId:4bc92df2419c1400d0fdebc5b09f113e30dc6c167b9c1af0641b31262f2a0f8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723683502417139789,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ee3bcd285e9df7e4bb10e968ec4c925393549948ecec928932893c721b7ee5e,PodSandboxId:e28fe438bc0c258d027aa48b1707ad1ae448518e5164c0f95e295121dea83d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723683502318836723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc,PodSandboxId:80cf7a8ac2d8c2b926374fc91fc186f68b48b07c0a66d7444367b8f8909680f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723683497518771726,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8,PodSandboxId:38c963d11d6ca2eb4aeb24b07e5a3e82900ec2d0f28e1c9972d9aad17e0648fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723683497512495149,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8,PodSandboxId:0138fd75175495a00c5ac5d424db95d085871855ec0538bba7b7cc89c8d7e788,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723683497477648607,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c,PodSandboxId:07367e8e3488ffbf080d4e38bab34939266a0f944a4ee6404505d6d244ea1942,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723683497423524388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051,PodSandboxId:a1d0190337c10341a25c9d5d3159cbc924fe66561dd8810c1b8b820f1822419d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723683494085462147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z2fdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d896218-56cb-44a1-9f4e-9d1edd0df78d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800515c9ab5a8951cb047cfe97b369811eb85f1d6608c5e5a3abd71d37f2827f,PodSandboxId:6b4d4b0ac1a32ec18d3987e1ad8ca4f1ff7ee235af55ffedd49905c34e1f0113,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723683177240946025,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7t6jw,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea3a5b0e-dbec-4ac6-af75-f6c3417b70be,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e4139a30c48f640d8e98f1ba952283af88959631c1f2342cea281b3bde60ad,PodSandboxId:e349553d11879763183387850a348109f53da17bd7a3bb4566e73e1d4c6f5a3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723683122490434213,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e4b4a2fa-35b0-4406-b5b8-eb90963b4c49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e,PodSandboxId:a93c061b3b0563c6f9077505cb45eaa972c012f6ef7373c32a29f5bbe2fb8377,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723683110885743702,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jtg5x,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: d390f416-a09a-4ffa-a373-578f570f375e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d,PodSandboxId:2eafab9d119accedfaed33a30f78d3401d2714e84fbb17f08afa2a3cd5743e79,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723683107484957880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dv78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a03c1ea6-c4b1-427e-8006-6efe52f6d083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff,PodSandboxId:0ffa578248454e7c2ca3dd67bf1d25e222119114f8dabc823007271919e12aa0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723683096690245393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
125d323b92aa2203c302ca61021765,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063,PodSandboxId:a1e7e4c32d43de14e34587e1e59366bc206a64252ed8430822be9c131a9dba8a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723683096687056390,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf19bf1a154a73f92
aaa2a01c231c958,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b,PodSandboxId:a58ecc268ed541798a0064360e5f94dad6cfb94d0187de75659f35d14015daee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723683096594819224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 196095a5ba6a996617055641ff0cf4cf,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9,PodSandboxId:a5e805766ccb471132d7e0afe8d3b80c5f55f54cfd921f8eedfd4c685cc90f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723683096637980891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-978269,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72e0a05e66161bc7a171a5dd8d3a65c,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6a30194-8374-41e9-85ed-72fba926d958 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	120eb7a5322b4       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   16ad6434f062d       busybox-7dff88458-7t6jw
	48c4909f10882       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   aca1c8c059dc6       kindnet-jtg5x
	4fcf1beb1bc92       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   84d0e2e7ed71f       coredns-6f6b679f8f-z2fdx
	8c8808d72fd47       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   4bc92df2419c1       kube-proxy-9dv78
	3ee3bcd285e9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e28fe438bc0c2       storage-provisioner
	d77133fc7b4e8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   80cf7a8ac2d8c       etcd-multinode-978269
	faada8a424239       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   38c963d11d6ca       kube-scheduler-multinode-978269
	ef69db1b2a37f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   0138fd7517549       kube-controller-manager-multinode-978269
	e855a6e97f20c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   07367e8e3488f       kube-apiserver-multinode-978269
	8bd26deb668b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   a1d0190337c10       coredns-6f6b679f8f-z2fdx
	800515c9ab5a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   6b4d4b0ac1a32       busybox-7dff88458-7t6jw
	22e4139a30c48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   e349553d11879       storage-provisioner
	8f29be96a4aa4       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   a93c061b3b056       kindnet-jtg5x
	d84e329513e70       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   2eafab9d119ac       kube-proxy-9dv78
	a0e3afa8b91de       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   0ffa578248454       kube-scheduler-multinode-978269
	5a6497a8901c2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   a1e7e4c32d43d       kube-apiserver-multinode-978269
	1295ded1643dc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   a5e805766ccb4       kube-controller-manager-multinode-978269
	60d7fb737c967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   a58ecc268ed54       etcd-multinode-978269
	
	
	==> coredns [4fcf1beb1bc92cebc59ec3fcd8e8188a7715e034929c6e140a15f8f1607b21eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42588 - 4420 "HINFO IN 424660939412603124.5981377023232911938. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011763689s
	
	
	==> coredns [8bd26deb668b879e88fb3cbd8ef0334ac2af9dced53a482cf56c9eb9950ee051] <==
	
	
	==> describe nodes <==
	Name:               multinode-978269
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-978269
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=multinode-978269
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T00_51_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:51:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-978269
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:51:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 00:58:21 +0000   Thu, 15 Aug 2024 00:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    multinode-978269
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 011be81033174bab9baea31821c8cceb
	  System UUID:                011be810-3317-4bab-9bae-a31821c8cceb
	  Boot ID:                    321329e1-47f2-4460-8db4-7c9aee80ba74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7t6jw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 coredns-6f6b679f8f-z2fdx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-978269                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-jtg5x                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-978269             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-978269    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9dv78                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-978269             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-978269 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-978269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-978269 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-978269 event: Registered Node multinode-978269 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-978269 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-978269 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-978269 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-978269 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node multinode-978269 event: Registered Node multinode-978269 in Controller
	
	
	Name:               multinode-978269-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-978269-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=multinode-978269
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T00_59_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 00:59:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-978269-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:00:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 01:00:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 01:00:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 01:00:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 00:59:32 +0000   Thu, 15 Aug 2024 01:00:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-978269-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 feb260eb0f094c598c21db9a6f456d5b
	  System UUID:                feb260eb-0f09-4c59-8c21-db9a6f456d5b
	  Boot ID:                    aae55ab2-4686-4046-9a2b-85273ca11b87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wcqhk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-p5zrg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m54s
	  kube-system                 kube-proxy-mstc7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m49s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m54s (x2 over 9m55s)  kubelet          Node multinode-978269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m54s (x2 over 9m55s)  kubelet          Node multinode-978269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m54s (x2 over 9m55s)  kubelet          Node multinode-978269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m35s                  kubelet          Node multinode-978269-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m25s)  kubelet          Node multinode-978269-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m25s)  kubelet          Node multinode-978269-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m25s)  kubelet          Node multinode-978269-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-978269-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-978269-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055972] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054978] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.161102] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.126423] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.270525] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.756944] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.826463] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.060576] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993565] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.072326] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.120277] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.100496] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.044882] kauditd_printk_skb: 68 callbacks suppressed
	[Aug15 00:52] kauditd_printk_skb: 14 callbacks suppressed
	[Aug15 00:58] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.140718] systemd-fstab-generator[2698]: Ignoring "noauto" option for root device
	[  +0.160625] systemd-fstab-generator[2712]: Ignoring "noauto" option for root device
	[  +0.146252] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +0.357448] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.752274] systemd-fstab-generator[2976]: Ignoring "noauto" option for root device
	[  +1.759861] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[  +5.644772] kauditd_printk_skb: 196 callbacks suppressed
	[  +6.561631] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.849715] systemd-fstab-generator[3956]: Ignoring "noauto" option for root device
	[ +18.385785] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [60d7fb737c967f6ee885ed37fe9c69cfa873b46573560ee3811db172ba74ca0b] <==
	{"level":"warn","ts":"2024-08-15T00:52:31.903474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.910054ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:52:31.903586Z","caller":"traceutil/trace.go:171","msg":"trace[1043404670] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:442; }","duration":"126.098711ms","start":"2024-08-15T00:52:31.777466Z","end":"2024-08-15T00:52:31.903565Z","steps":["trace[1043404670] 'range keys from in-memory index tree'  (duration: 125.886659ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:52:31.903718Z","caller":"traceutil/trace.go:171","msg":"trace[1405404509] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"211.727279ms","start":"2024-08-15T00:52:31.691982Z","end":"2024-08-15T00:52:31.903709Z","steps":["trace[1405404509] 'process raft request'  (duration: 136.334786ms)","trace[1405404509] 'compare'  (duration: 74.660063ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:52:37.951499Z","caller":"traceutil/trace.go:171","msg":"trace[1970639582] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"142.515217ms","start":"2024-08-15T00:52:37.808963Z","end":"2024-08-15T00:52:37.951478Z","steps":["trace[1970639582] 'process raft request'  (duration: 142.387713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:52:38.262259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.560521ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6583015068228233705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:457 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T00:52:38.262406Z","caller":"traceutil/trace.go:171","msg":"trace[1249122724] linearizableReadLoop","detail":"{readStateIndex:504; appliedIndex:503; }","duration":"137.737379ms","start":"2024-08-15T00:52:38.124657Z","end":"2024-08-15T00:52:38.262395Z","steps":["trace[1249122724] 'read index received'  (duration: 35.3295ms)","trace[1249122724] 'applied index is now lower than readState.Index'  (duration: 102.406739ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:52:38.262505Z","caller":"traceutil/trace.go:171","msg":"trace[1274586763] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"303.648575ms","start":"2024-08-15T00:52:37.958844Z","end":"2024-08-15T00:52:38.262493Z","steps":["trace[1274586763] 'process raft request'  (duration: 201.349685ms)","trace[1274586763] 'compare'  (duration: 101.428191ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:52:38.262597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.931681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-978269-m02\" ","response":"range_response_count:1 size:2887"}
	{"level":"warn","ts":"2024-08-15T00:52:38.262607Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T00:52:37.958826Z","time spent":"303.735227ms","remote":"127.0.0.1:57356","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:457 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-15T00:52:38.262637Z","caller":"traceutil/trace.go:171","msg":"trace[1723559130] range","detail":"{range_begin:/registry/minions/multinode-978269-m02; range_end:; response_count:1; response_revision:481; }","duration":"137.976275ms","start":"2024-08-15T00:52:38.124654Z","end":"2024-08-15T00:52:38.262630Z","steps":["trace[1723559130] 'agreement among raft nodes before linearized reading'  (duration: 137.873227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T00:53:27.784585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.164456ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6583015068228234127 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-978269-m03.17ebc0beaa74d324\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-978269-m03.17ebc0beaa74d324\" value_size:642 lease:6583015068228233819 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T00:53:27.784756Z","caller":"traceutil/trace.go:171","msg":"trace[237720874] linearizableReadLoop","detail":"{readStateIndex:612; appliedIndex:611; }","duration":"134.319766ms","start":"2024-08-15T00:53:27.650419Z","end":"2024-08-15T00:53:27.784739Z","steps":["trace[237720874] 'read index received'  (duration: 3.824125ms)","trace[237720874] 'applied index is now lower than readState.Index'  (duration: 130.494928ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T00:53:27.784829Z","caller":"traceutil/trace.go:171","msg":"trace[305826630] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"212.090036ms","start":"2024-08-15T00:53:27.572726Z","end":"2024-08-15T00:53:27.784816Z","steps":["trace[305826630] 'process raft request'  (duration: 81.588395ms)","trace[305826630] 'compare'  (duration: 130.077002ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T00:53:27.785060Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.632125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-978269-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T00:53:27.785095Z","caller":"traceutil/trace.go:171","msg":"trace[192360410] range","detail":"{range_begin:/registry/minions/multinode-978269-m03; range_end:; response_count:0; response_revision:578; }","duration":"134.673362ms","start":"2024-08-15T00:53:27.650415Z","end":"2024-08-15T00:53:27.785088Z","steps":["trace[192360410] 'agreement among raft nodes before linearized reading'  (duration: 134.617915ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T00:56:42.025316Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T00:56:42.025465Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-978269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	{"level":"warn","ts":"2024-08-15T00:56:42.025613Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.025715Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.084897Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T00:56:42.084985Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.9:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T00:56:42.085073Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e6c05fccff8d5b5b","current-leader-member-id":"e6c05fccff8d5b5b"}
	{"level":"info","ts":"2024-08-15T00:56:42.088370Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:56:42.088494Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:56:42.088514Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-978269","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"]}
	
	
	==> etcd [d77133fc7b4e846c266aa900382bffd31131ad078c4c09a793ed9d21fd1f8cfc] <==
	{"level":"info","ts":"2024-08-15T00:58:17.963578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b switched to configuration voters=(16627395158317292379)"}
	{"level":"info","ts":"2024-08-15T00:58:17.963668Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e83eb6b012f1d297","local-member-id":"e6c05fccff8d5b5b","added-peer-id":"e6c05fccff8d5b5b","added-peer-peer-urls":["https://192.168.39.9:2380"]}
	{"level":"info","ts":"2024-08-15T00:58:17.964312Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e83eb6b012f1d297","local-member-id":"e6c05fccff8d5b5b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:58:17.971395Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T00:58:17.990244Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T00:58:17.990516Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e6c05fccff8d5b5b","initial-advertise-peer-urls":["https://192.168.39.9:2380"],"listen-peer-urls":["https://192.168.39.9:2380"],"advertise-client-urls":["https://192.168.39.9:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.9:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T00:58:17.990554Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T00:58:17.990698Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:58:17.990719Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.9:2380"}
	{"level":"info","ts":"2024-08-15T00:58:19.677802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b received MsgPreVoteResp from e6c05fccff8d5b5b at term 2"}
	{"level":"info","ts":"2024-08-15T00:58:19.677928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b received MsgVoteResp from e6c05fccff8d5b5b at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6c05fccff8d5b5b became leader at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.677957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e6c05fccff8d5b5b elected leader e6c05fccff8d5b5b at term 3"}
	{"level":"info","ts":"2024-08-15T00:58:19.684010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:58:19.685136Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:58:19.683968Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e6c05fccff8d5b5b","local-member-attributes":"{Name:multinode-978269 ClientURLs:[https://192.168.39.9:2379]}","request-path":"/0/members/e6c05fccff8d5b5b/attributes","cluster-id":"e83eb6b012f1d297","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T00:58:19.685497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T00:58:19.685834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T00:58:19.685897Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T00:58:19.686515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.9:2379"}
	{"level":"info","ts":"2024-08-15T00:58:19.686776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T00:58:19.688034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:02:26 up 11 min,  0 users,  load average: 0.20, 0.29, 0.25
	Linux multinode-978269 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [48c4909f1088272f99373d9c6c535612dcbc5a9280a4248f7612cd2b871ed27d] <==
	I0815 01:01:23.410565       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:01:33.419407       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:01:33.419461       1 main.go:299] handling current node
	I0815 01:01:33.419496       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:01:33.419502       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:01:43.414340       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:01:43.414451       1 main.go:299] handling current node
	I0815 01:01:43.414480       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:01:43.414498       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:01:53.410449       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:01:53.410594       1 main.go:299] handling current node
	I0815 01:01:53.410634       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:01:53.410652       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:02:03.415296       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:02:03.415339       1 main.go:299] handling current node
	I0815 01:02:03.415358       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:02:03.415363       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:02:13.409697       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:02:13.409791       1 main.go:299] handling current node
	I0815 01:02:13.409823       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:02:13.409829       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 01:02:23.410558       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 01:02:23.410614       1 main.go:299] handling current node
	I0815 01:02:23.410629       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 01:02:23.410635       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [8f29be96a4aa4a647f5c3e34d0a89708c630bd7ab622d6437cfa7f5cdc40e35e] <==
	I0815 00:56:01.797788       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:11.799108       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:11.799222       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:11.799367       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:11.799388       1 main.go:299] handling current node
	I0815 00:56:11.799414       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:11.799431       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:21.803628       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:21.803693       1 main.go:299] handling current node
	I0815 00:56:21.803714       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:21.803724       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:21.803905       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:21.803925       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:31.796812       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:31.796886       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	I0815 00:56:31.797081       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:31.797102       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:31.797206       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:31.797226       1 main.go:299] handling current node
	I0815 00:56:41.798599       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0815 00:56:41.798687       1 main.go:322] Node multinode-978269-m03 has CIDR [10.244.3.0/24] 
	I0815 00:56:41.798879       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0815 00:56:41.798889       1 main.go:299] handling current node
	I0815 00:56:41.798926       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0815 00:56:41.798943       1 main.go:322] Node multinode-978269-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5a6497a8901c2354a41cca5362b7c83105c4e98c4a01bc6ae241a11daed8d063] <==
	I0815 00:51:40.127222       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0815 00:51:40.127333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 00:51:40.755852       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 00:51:40.798997       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 00:51:40.928890       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0815 00:51:40.937124       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.9]
	I0815 00:51:40.938956       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 00:51:40.946055       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:51:41.179495       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:51:41.812103       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:51:41.831591       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 00:51:41.842266       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:51:46.784806       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0815 00:51:46.840511       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0815 00:52:58.454951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37682: use of closed network connection
	E0815 00:52:58.641955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37704: use of closed network connection
	E0815 00:52:58.810757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37722: use of closed network connection
	E0815 00:52:58.974447       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37734: use of closed network connection
	E0815 00:52:59.140548       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37760: use of closed network connection
	E0815 00:52:59.299594       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37770: use of closed network connection
	E0815 00:52:59.571302       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37792: use of closed network connection
	E0815 00:52:59.735482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37818: use of closed network connection
	E0815 00:52:59.895388       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37824: use of closed network connection
	E0815 00:53:00.054671       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37840: use of closed network connection
	I0815 00:56:42.024307       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [e855a6e97f20c22d0ce060992e1912bff0aacd36cc3a800b3a287f2648d7556c] <==
	I0815 00:58:21.052201       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 00:58:21.052322       1 policy_source.go:224] refreshing policies
	I0815 00:58:21.055662       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 00:58:21.055712       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 00:58:21.055719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 00:58:21.056992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 00:58:21.058407       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 00:58:21.060112       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 00:58:21.060393       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 00:58:21.060540       1 aggregator.go:171] initial CRD sync complete...
	I0815 00:58:21.060579       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 00:58:21.060601       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 00:58:21.060623       1 cache.go:39] Caches are synced for autoregister controller
	I0815 00:58:21.062891       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 00:58:21.062969       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 00:58:21.063905       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0815 00:58:21.075747       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 00:58:21.870747       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 00:58:23.329474       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 00:58:23.444891       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 00:58:23.458658       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 00:58:23.533627       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 00:58:23.540400       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 00:58:24.662941       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 00:58:24.712073       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1295ded1643dca4c24db6c3f853b2554dd59c71aeaa855109f3be5ce004788a9] <==
	I0815 00:54:16.398309       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:16.398887       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:17.641070       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-978269-m03\" does not exist"
	I0815 00:54:17.641789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:17.651221       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-978269-m03" podCIDRs=["10.244.3.0/24"]
	I0815 00:54:17.651252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.654584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.659861       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:17.892896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:18.216625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:21.391406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:27.996419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:36.791876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:54:36.792084       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:36.801088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:54:41.369429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:16.385336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:16.385680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m03"
	I0815 00:55:16.408687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:16.422510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.267378ms"
	I0815 00:55:16.422592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.177µs"
	I0815 00:55:21.438461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:21.453032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:55:21.475476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 00:55:31.545822       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	
	
	==> kube-controller-manager [ef69db1b2a37fbdaf3f2bd7f4a9cc02236af37964017d8ec990faa80544d03a8] <==
	I0815 00:59:40.600753       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-978269-m03" podCIDRs=["10.244.2.0/24"]
	I0815 00:59:40.602220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.602278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.609826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:40.798492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:41.141587       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:44.469677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:50.839618       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:59.689879       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 00:59:59.690450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 00:59:59.702126       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 01:00:04.183331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 01:00:04.207416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 01:00:04.421913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 01:00:04.635020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-978269-m02"
	I0815 01:00:04.635416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m03"
	I0815 01:00:44.440670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 01:00:44.464988       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 01:00:44.468611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.815484ms"
	I0815 01:00:44.468773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.238µs"
	I0815 01:00:49.509607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-978269-m02"
	I0815 01:01:04.299231       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-qn9xq"
	I0815 01:01:04.322513       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-qn9xq"
	I0815 01:01:04.322560       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sj276"
	I0815 01:01:04.347080       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sj276"
	
	
	==> kube-proxy [8c8808d72fd47a8f13ba4db52121147025d9a43d98ae4dd12cb82e5f1d4fb953] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:58:22.688563       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:58:22.700275       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E0815 00:58:22.700352       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:58:22.748852       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:58:22.748921       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:58:22.748950       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:58:22.750977       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:58:22.751277       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:58:22.751299       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:58:22.753068       1 config.go:197] "Starting service config controller"
	I0815 00:58:22.753106       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:58:22.753126       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:58:22.753130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:58:22.753602       1 config.go:326] "Starting node config controller"
	I0815 00:58:22.753628       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:58:22.853439       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:58:22.853479       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:58:22.853704       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d84e329513e703318a5d77193fbb5575a366f47d95a140a41c6eba7e9a8dca7d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 00:51:47.717868       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 00:51:47.728685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E0815 00:51:47.728828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 00:51:47.755665       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 00:51:47.755693       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 00:51:47.755720       1 server_linux.go:169] "Using iptables Proxier"
	I0815 00:51:47.758896       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 00:51:47.759249       1 server.go:483] "Version info" version="v1.31.0"
	I0815 00:51:47.759396       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:51:47.760690       1 config.go:197] "Starting service config controller"
	I0815 00:51:47.760855       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 00:51:47.760903       1 config.go:104] "Starting endpoint slice config controller"
	I0815 00:51:47.760920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 00:51:47.762563       1 config.go:326] "Starting node config controller"
	I0815 00:51:47.762653       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 00:51:47.861753       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 00:51:47.861867       1 shared_informer.go:320] Caches are synced for service config
	I0815 00:51:47.863148       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a0e3afa8b91dee6d0c5d514cb9e17b298ed508558d384e241dd3863668c2b6ff] <==
	E0815 00:51:39.230329       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:51:40.037252       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.037420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.078506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 00:51:40.078609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.092526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 00:51:40.092604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.102852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.102932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.137808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 00:51:40.137909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.157530       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 00:51:40.157623       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 00:51:40.160708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 00:51:40.160749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.236272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 00:51:40.236316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.441243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 00:51:40.441345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.546882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 00:51:40.547031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 00:51:40.561875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 00:51:40.561960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 00:51:43.420473       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 00:56:42.031882       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [faada8a4242393b05c2a0a978a64346c85fa05eb86647a47d7f96d44ea8591c8] <==
	I0815 00:58:18.279718       1 serving.go:386] Generated self-signed cert in-memory
	W0815 00:58:20.928397       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 00:58:20.928436       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 00:58:20.928446       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 00:58:20.928460       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 00:58:20.972116       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 00:58:20.977233       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 00:58:20.986835       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 00:58:20.987003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 00:58:20.987053       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 00:58:20.987081       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 00:58:21.087478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:01:06 multinode-978269 kubelet[3106]: E0815 01:01:06.908546    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683666908304120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:16 multinode-978269 kubelet[3106]: E0815 01:01:16.861187    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:01:16 multinode-978269 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:01:16 multinode-978269 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:01:16 multinode-978269 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:01:16 multinode-978269 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:01:16 multinode-978269 kubelet[3106]: E0815 01:01:16.909718    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683676909253844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:16 multinode-978269 kubelet[3106]: E0815 01:01:16.909769    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683676909253844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:26 multinode-978269 kubelet[3106]: E0815 01:01:26.910781    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683686910564952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:26 multinode-978269 kubelet[3106]: E0815 01:01:26.910803    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683686910564952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:36 multinode-978269 kubelet[3106]: E0815 01:01:36.913318    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683696911998310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:36 multinode-978269 kubelet[3106]: E0815 01:01:36.913364    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683696911998310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:46 multinode-978269 kubelet[3106]: E0815 01:01:46.915887    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683706915301696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:46 multinode-978269 kubelet[3106]: E0815 01:01:46.915985    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683706915301696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:56 multinode-978269 kubelet[3106]: E0815 01:01:56.918109    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683716917528343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:01:56 multinode-978269 kubelet[3106]: E0815 01:01:56.918147    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683716917528343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:02:06 multinode-978269 kubelet[3106]: E0815 01:02:06.919510    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683726919275864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:02:06 multinode-978269 kubelet[3106]: E0815 01:02:06.919569    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683726919275864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:02:16 multinode-978269 kubelet[3106]: E0815 01:02:16.861839    3106 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:02:16 multinode-978269 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:02:16 multinode-978269 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:02:16 multinode-978269 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:02:16 multinode-978269 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:02:16 multinode-978269 kubelet[3106]: E0815 01:02:16.920952    3106 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683736920430878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:02:16 multinode-978269 kubelet[3106]: E0815 01:02:16.920974    3106 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723683736920430878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0815 01:02:25.557738   51389 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-978269 -n multinode-978269
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-978269 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.31s)

TestPreload (278.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-851063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-851063 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m15.815242821s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-851063 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-851063 image pull gcr.io/k8s-minikube/busybox: (2.817813376s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-851063
E0815 01:08:28.709104   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:08:45.640860   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:09:41.523494   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-851063: exit status 82 (2m0.46253116s)

-- stdout --
	* Stopping node "test-preload-851063"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
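
The stop above failed with GUEST_STOP_TIMEOUT (exit status 82) because the VM still reported state "Running" when the stop timeout expired. As a generic illustration only (not minikube's implementation), a stop path of this kind typically polls the machine state under a deadline and gives up once the context expires:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForStopped polls state() until it reports "Stopped" or ctx expires.
// state is a placeholder for whatever queries the VM's power state.
func waitForStopped(ctx context.Context, state func() (string, error)) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		s, err := state()
		if err != nil {
			return err
		}
		if s == "Stopped" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("unable to stop vm, current state %q: %w", s, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Stub that never leaves "Running", mirroring the failure above.
	err := waitForStopped(ctx, func() (string, error) { return "Running", nil })
	fmt.Println(errors.Is(err, context.DeadlineExceeded), err)
}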
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-851063 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-15 01:10:28.363090565 +0000 UTC m=+3903.063322161
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-851063 -n test-preload-851063
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-851063 -n test-preload-851063: exit status 3 (18.617188532s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:10:46.977022   54269 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	E0815 01:10:46.977041   54269 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-851063" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-851063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-851063
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-851063: (1.0948779s)
--- FAIL: TestPreload (278.81s)

                                                
                                    
x
+
TestKubernetesUpgrade (410.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0815 01:13:45.640924   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m2.968674568s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-146394] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-146394" primary control-plane node in "kubernetes-upgrade-146394" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:13:38.382472   58243 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:13:38.382676   58243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:13:38.382690   58243 out.go:304] Setting ErrFile to fd 2...
	I0815 01:13:38.382697   58243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:13:38.382991   58243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:13:38.383759   58243 out.go:298] Setting JSON to false
	I0815 01:13:38.384979   58243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6963,"bootTime":1723677455,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:13:38.385069   58243 start.go:139] virtualization: kvm guest
	I0815 01:13:38.390070   58243 out.go:177] * [kubernetes-upgrade-146394] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:13:38.392030   58243 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:13:38.392072   58243 notify.go:220] Checking for updates...
	I0815 01:13:38.393522   58243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:13:38.395243   58243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:13:38.396773   58243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:13:38.397993   58243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:13:38.399233   58243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:13:38.400833   58243 config.go:182] Loaded profile config "NoKubernetes-312183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:13:38.401023   58243 config.go:182] Loaded profile config "offline-crio-278022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:13:38.401128   58243 config.go:182] Loaded profile config "running-upgrade-339919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0815 01:13:38.401251   58243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:13:38.435990   58243 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 01:13:38.437269   58243 start.go:297] selected driver: kvm2
	I0815 01:13:38.437288   58243 start.go:901] validating driver "kvm2" against <nil>
	I0815 01:13:38.437304   58243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:13:38.437995   58243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:13:38.438092   58243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:13:38.453434   58243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:13:38.453492   58243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 01:13:38.453723   58243 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 01:13:38.453790   58243 cni.go:84] Creating CNI manager for ""
	I0815 01:13:38.453808   58243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:13:38.453817   58243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 01:13:38.453887   58243 start.go:340] cluster config:
	{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:13:38.454022   58243 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:13:38.455484   58243 out.go:177] * Starting "kubernetes-upgrade-146394" primary control-plane node in "kubernetes-upgrade-146394" cluster
	I0815 01:13:38.456951   58243 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:13:38.456996   58243 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:13:38.457009   58243 cache.go:56] Caching tarball of preloaded images
	I0815 01:13:38.457092   58243 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:13:38.457105   58243 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 01:13:38.457219   58243 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:13:38.457243   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json: {Name:mk09a5cdc5b261df16e6e09f1987939b187c8102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:13:38.457396   58243 start.go:360] acquireMachinesLock for kubernetes-upgrade-146394: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:14:13.097385   58243 start.go:364] duration metric: took 34.639947715s to acquireMachinesLock for "kubernetes-upgrade-146394"
	I0815 01:14:13.097458   58243 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:14:13.097586   58243 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 01:14:13.099390   58243 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 01:14:13.099572   58243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:14:13.099630   58243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:14:13.118795   58243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0815 01:14:13.119302   58243 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:14:13.119888   58243 main.go:141] libmachine: Using API Version  1
	I0815 01:14:13.119917   58243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:14:13.120242   58243 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:14:13.120429   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:14:13.120580   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:13.120737   58243 start.go:159] libmachine.API.Create for "kubernetes-upgrade-146394" (driver="kvm2")
	I0815 01:14:13.120765   58243 client.go:168] LocalClient.Create starting
	I0815 01:14:13.120796   58243 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 01:14:13.120838   58243 main.go:141] libmachine: Decoding PEM data...
	I0815 01:14:13.120866   58243 main.go:141] libmachine: Parsing certificate...
	I0815 01:14:13.120935   58243 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 01:14:13.120958   58243 main.go:141] libmachine: Decoding PEM data...
	I0815 01:14:13.120977   58243 main.go:141] libmachine: Parsing certificate...
	I0815 01:14:13.120999   58243 main.go:141] libmachine: Running pre-create checks...
	I0815 01:14:13.121011   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .PreCreateCheck
	I0815 01:14:13.121398   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetConfigRaw
	I0815 01:14:13.121840   58243 main.go:141] libmachine: Creating machine...
	I0815 01:14:13.121857   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .Create
	I0815 01:14:13.122013   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Creating KVM machine...
	I0815 01:14:13.123111   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found existing default KVM network
	I0815 01:14:13.124354   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.124221   58792 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:71:c7} reservation:<nil>}
	I0815 01:14:13.125141   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.125050   58792 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fb:4b:55} reservation:<nil>}
	I0815 01:14:13.125936   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.125874   58792 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:d8:6f} reservation:<nil>}
	I0815 01:14:13.126991   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.126902   58792 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I0815 01:14:13.127023   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | created network xml: 
	I0815 01:14:13.127036   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | <network>
	I0815 01:14:13.127048   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   <name>mk-kubernetes-upgrade-146394</name>
	I0815 01:14:13.127066   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   <dns enable='no'/>
	I0815 01:14:13.127076   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   
	I0815 01:14:13.127090   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0815 01:14:13.127104   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |     <dhcp>
	I0815 01:14:13.127115   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0815 01:14:13.127120   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |     </dhcp>
	I0815 01:14:13.127127   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   </ip>
	I0815 01:14:13.127137   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG |   
	I0815 01:14:13.127149   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | </network>
	I0815 01:14:13.127160   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | 
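
In the driver log above, network.go skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because they are already bound to existing virbr bridges, takes the first free candidate (192.168.72.0/24), and emits the libvirt <network> XML for it. A rough sketch of that "first free candidate subnet" selection, with the candidate list and taken set hard-coded from the log for illustration (this is not the network.go implementation):

package main

import "fmt"

// firstFreeSubnet returns the first candidate CIDR that is not already in use.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // 192.168.72.0/24
	}
}

If you need to reproduce the network outside minikube, the generated XML can be saved to a file and applied manually with virsh net-define followed by virsh net-start.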
	I0815 01:14:13.132373   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | trying to create private KVM network mk-kubernetes-upgrade-146394 192.168.72.0/24...
	I0815 01:14:13.213898   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394 ...
	I0815 01:14:13.213935   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 01:14:13.213947   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | private KVM network mk-kubernetes-upgrade-146394 192.168.72.0/24 created
	I0815 01:14:13.213969   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.213814   58792 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:14:13.213991   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 01:14:13.444711   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.444545   58792 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa...
	I0815 01:14:13.527033   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.526912   58792 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/kubernetes-upgrade-146394.rawdisk...
	I0815 01:14:13.527062   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Writing magic tar header
	I0815 01:14:13.527080   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Writing SSH key tar header
	I0815 01:14:13.527093   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:13.527039   58792 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394 ...
	I0815 01:14:13.527196   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394
	I0815 01:14:13.527221   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394 (perms=drwx------)
	I0815 01:14:13.527234   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 01:14:13.527249   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:14:13.527261   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 01:14:13.527278   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 01:14:13.527297   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 01:14:13.527311   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home/jenkins
	I0815 01:14:13.527344   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 01:14:13.527365   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Checking permissions on dir: /home
	I0815 01:14:13.527376   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 01:14:13.527388   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Skipping /home - not owner
	I0815 01:14:13.527410   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 01:14:13.527425   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 01:14:13.527441   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Creating domain...
	I0815 01:14:13.528523   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) define libvirt domain using xml: 
	I0815 01:14:13.528548   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) <domain type='kvm'>
	I0815 01:14:13.528561   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <name>kubernetes-upgrade-146394</name>
	I0815 01:14:13.528581   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <memory unit='MiB'>2200</memory>
	I0815 01:14:13.528595   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <vcpu>2</vcpu>
	I0815 01:14:13.528602   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <features>
	I0815 01:14:13.528616   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <acpi/>
	I0815 01:14:13.528626   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <apic/>
	I0815 01:14:13.528635   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <pae/>
	I0815 01:14:13.528643   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     
	I0815 01:14:13.528689   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   </features>
	I0815 01:14:13.528713   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <cpu mode='host-passthrough'>
	I0815 01:14:13.528724   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   
	I0815 01:14:13.528733   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   </cpu>
	I0815 01:14:13.528745   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <os>
	I0815 01:14:13.528753   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <type>hvm</type>
	I0815 01:14:13.528764   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <boot dev='cdrom'/>
	I0815 01:14:13.528775   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <boot dev='hd'/>
	I0815 01:14:13.528794   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <bootmenu enable='no'/>
	I0815 01:14:13.528810   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   </os>
	I0815 01:14:13.528819   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   <devices>
	I0815 01:14:13.528828   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <disk type='file' device='cdrom'>
	I0815 01:14:13.528844   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/boot2docker.iso'/>
	I0815 01:14:13.528855   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <target dev='hdc' bus='scsi'/>
	I0815 01:14:13.528866   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <readonly/>
	I0815 01:14:13.528891   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </disk>
	I0815 01:14:13.528923   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <disk type='file' device='disk'>
	I0815 01:14:13.528941   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 01:14:13.528957   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/kubernetes-upgrade-146394.rawdisk'/>
	I0815 01:14:13.528964   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <target dev='hda' bus='virtio'/>
	I0815 01:14:13.528971   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </disk>
	I0815 01:14:13.528978   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <interface type='network'>
	I0815 01:14:13.528984   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <source network='mk-kubernetes-upgrade-146394'/>
	I0815 01:14:13.528991   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <model type='virtio'/>
	I0815 01:14:13.528997   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </interface>
	I0815 01:14:13.529010   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <interface type='network'>
	I0815 01:14:13.529025   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <source network='default'/>
	I0815 01:14:13.529044   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <model type='virtio'/>
	I0815 01:14:13.529054   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </interface>
	I0815 01:14:13.529076   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <serial type='pty'>
	I0815 01:14:13.529089   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <target port='0'/>
	I0815 01:14:13.529099   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </serial>
	I0815 01:14:13.529119   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <console type='pty'>
	I0815 01:14:13.529133   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <target type='serial' port='0'/>
	I0815 01:14:13.529145   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </console>
	I0815 01:14:13.529155   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     <rng model='virtio'>
	I0815 01:14:13.529169   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)       <backend model='random'>/dev/random</backend>
	I0815 01:14:13.529178   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     </rng>
	I0815 01:14:13.529187   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     
	I0815 01:14:13.529196   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)     
	I0815 01:14:13.529218   58243 main.go:141] libmachine: (kubernetes-upgrade-146394)   </devices>
	I0815 01:14:13.529241   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) </domain>
	I0815 01:14:13.529255   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) 
	I0815 01:14:13.533554   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:7f:3c:ed in network default
	I0815 01:14:13.534266   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring networks are active...
	I0815 01:14:13.534288   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:13.535067   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network default is active
	I0815 01:14:13.535454   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network mk-kubernetes-upgrade-146394 is active
	I0815 01:14:13.536073   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Getting domain xml...
	I0815 01:14:13.536935   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Creating domain...
	I0815 01:14:14.807158   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting to get IP...
	I0815 01:14:14.807946   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:14.808346   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:14.808393   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:14.808344   58792 retry.go:31] will retry after 214.444767ms: waiting for machine to come up
	I0815 01:14:15.024612   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.025088   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.025117   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:15.025036   58792 retry.go:31] will retry after 355.523357ms: waiting for machine to come up
	I0815 01:14:15.382290   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.382895   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.382926   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:15.382836   58792 retry.go:31] will retry after 463.39496ms: waiting for machine to come up
	I0815 01:14:15.847222   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.847635   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:15.847664   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:15.847626   58792 retry.go:31] will retry after 574.177696ms: waiting for machine to come up
	I0815 01:14:16.423321   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:16.423838   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:16.423869   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:16.423784   58792 retry.go:31] will retry after 571.063394ms: waiting for machine to come up
	I0815 01:14:16.996421   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:16.996880   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:16.996929   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:16.996815   58792 retry.go:31] will retry after 739.636001ms: waiting for machine to come up
	I0815 01:14:17.738549   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:17.738983   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:17.739005   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:17.738940   58792 retry.go:31] will retry after 884.271908ms: waiting for machine to come up
	I0815 01:14:18.625229   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:18.625663   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:18.625686   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:18.625636   58792 retry.go:31] will retry after 1.158797398s: waiting for machine to come up
	I0815 01:14:19.785966   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:19.786524   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:19.786554   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:19.786460   58792 retry.go:31] will retry after 1.61496354s: waiting for machine to come up
	I0815 01:14:21.402572   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:21.403005   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:21.403036   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:21.402954   58792 retry.go:31] will retry after 1.502038716s: waiting for machine to come up
	I0815 01:14:22.906171   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:22.906644   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:22.906668   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:22.906600   58792 retry.go:31] will retry after 2.774386974s: waiting for machine to come up
	I0815 01:14:25.682341   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:25.682894   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:25.682915   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:25.682842   58792 retry.go:31] will retry after 2.702585994s: waiting for machine to come up
	I0815 01:14:28.387496   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:28.388036   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:28.388067   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:28.387992   58792 retry.go:31] will retry after 3.389985972s: waiting for machine to come up
	I0815 01:14:31.779383   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:31.779890   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:14:31.779922   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:14:31.779817   58792 retry.go:31] will retry after 3.538006878s: waiting for machine to come up
	I0815 01:14:35.320224   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.320767   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Found IP for machine: 192.168.72.130
	I0815 01:14:35.320790   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserving static IP address...
	I0815 01:14:35.320814   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has current primary IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.321262   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-146394", mac: "52:54:00:c0:3a:c8", ip: "192.168.72.130"} in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.397548   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Getting to WaitForSSH function...
	I0815 01:14:35.397581   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserved static IP address: 192.168.72.130
	I0815 01:14:35.397621   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting for SSH to be available...
	I0815 01:14:35.400175   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.400561   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:35.400591   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.400711   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH client type: external
	I0815 01:14:35.400729   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa (-rw-------)
	I0815 01:14:35.400756   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:14:35.400769   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | About to run SSH command:
	I0815 01:14:35.400786   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | exit 0
	I0815 01:14:35.528492   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | SSH cmd err, output: <nil>: 
	I0815 01:14:35.528793   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) KVM machine creation complete!
	I0815 01:14:35.529070   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetConfigRaw
	I0815 01:14:35.529680   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:35.529877   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:35.530031   58243 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 01:14:35.530047   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetState
	I0815 01:14:35.531462   58243 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 01:14:35.531474   58243 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 01:14:35.531480   58243 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 01:14:35.531501   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:35.533621   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.533989   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:35.534018   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.534209   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:35.534374   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.534549   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.534721   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:35.534890   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:35.535124   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:35.535139   58243 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 01:14:35.647766   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:14:35.647796   58243 main.go:141] libmachine: Detecting the provisioner...
	I0815 01:14:35.647810   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:35.650685   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.651072   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:35.651099   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.651207   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:35.651398   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.651559   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.651684   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:35.651818   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:35.651991   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:35.652005   58243 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 01:14:35.764868   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 01:14:35.764953   58243 main.go:141] libmachine: found compatible host: buildroot
	I0815 01:14:35.764963   58243 main.go:141] libmachine: Provisioning with buildroot...
	I0815 01:14:35.764970   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:14:35.765215   58243 buildroot.go:166] provisioning hostname "kubernetes-upgrade-146394"
	I0815 01:14:35.765246   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:14:35.765439   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:35.768319   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.768670   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:35.768703   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.768809   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:35.768993   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.769158   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.769302   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:35.769464   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:35.769635   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:35.769647   58243 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-146394 && echo "kubernetes-upgrade-146394" | sudo tee /etc/hostname
	I0815 01:14:35.902653   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-146394
	
	I0815 01:14:35.902694   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:35.905753   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.906124   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:35.906158   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:35.906408   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:35.906706   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.906901   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:35.907041   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:35.907185   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:35.907354   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:35.907371   58243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-146394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-146394/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-146394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:14:36.028570   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:14:36.028603   58243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:14:36.028633   58243 buildroot.go:174] setting up certificates
	I0815 01:14:36.028644   58243 provision.go:84] configureAuth start
	I0815 01:14:36.028677   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:14:36.028929   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:14:36.031454   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.031848   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.031867   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.032052   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.034544   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.034942   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.034969   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.035094   58243 provision.go:143] copyHostCerts
	I0815 01:14:36.035205   58243 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:14:36.035223   58243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:14:36.035285   58243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:14:36.035381   58243 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:14:36.035390   58243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:14:36.035412   58243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:14:36.035467   58243 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:14:36.035480   58243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:14:36.035513   58243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:14:36.035615   58243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-146394 san=[127.0.0.1 192.168.72.130 kubernetes-upgrade-146394 localhost minikube]
	I0815 01:14:36.152316   58243 provision.go:177] copyRemoteCerts
	I0815 01:14:36.152373   58243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:14:36.152394   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.155294   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.155661   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.155694   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.155893   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.156091   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.156249   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.156390   58243 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:14:36.242879   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:14:36.266879   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0815 01:14:36.290677   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:14:36.315370   58243 provision.go:87] duration metric: took 286.715818ms to configureAuth
	I0815 01:14:36.315402   58243 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:14:36.315595   58243 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:14:36.315681   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.318092   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.318455   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.318486   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.318707   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.318947   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.319144   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.319280   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.319477   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:36.319691   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:36.319703   58243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:14:36.596337   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:14:36.596370   58243 main.go:141] libmachine: Checking connection to Docker...
	I0815 01:14:36.596394   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetURL
	I0815 01:14:36.597944   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using libvirt version 6000000
	I0815 01:14:36.600816   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.601214   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.601245   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.601487   58243 main.go:141] libmachine: Docker is up and running!
	I0815 01:14:36.601501   58243 main.go:141] libmachine: Reticulating splines...
	I0815 01:14:36.601508   58243 client.go:171] duration metric: took 23.480735585s to LocalClient.Create
	I0815 01:14:36.601530   58243 start.go:167] duration metric: took 23.480795358s to libmachine.API.Create "kubernetes-upgrade-146394"
	I0815 01:14:36.601536   58243 start.go:293] postStartSetup for "kubernetes-upgrade-146394" (driver="kvm2")
	I0815 01:14:36.601558   58243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:14:36.601577   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:36.601857   58243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:14:36.601887   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.604277   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.604693   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.604730   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.604877   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.605080   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.605258   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.605432   58243 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:14:36.695546   58243 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:14:36.699624   58243 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:14:36.699654   58243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:14:36.699730   58243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:14:36.699835   58243 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:14:36.699953   58243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:14:36.709456   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:14:36.732072   58243 start.go:296] duration metric: took 130.524238ms for postStartSetup
	I0815 01:14:36.732117   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetConfigRaw
	I0815 01:14:36.732772   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:14:36.735354   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.735792   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.735823   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.736058   58243 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:14:36.736290   58243 start.go:128] duration metric: took 23.638689917s to createHost
	I0815 01:14:36.736321   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.738780   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.739190   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.739221   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.739346   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.739557   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.739717   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.739855   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.740043   58243 main.go:141] libmachine: Using SSH client type: native
	I0815 01:14:36.740274   58243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:14:36.740293   58243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:14:36.857684   58243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723684476.834656069
	
	I0815 01:14:36.857716   58243 fix.go:216] guest clock: 1723684476.834656069
	I0815 01:14:36.857729   58243 fix.go:229] Guest: 2024-08-15 01:14:36.834656069 +0000 UTC Remote: 2024-08-15 01:14:36.736307503 +0000 UTC m=+58.396681536 (delta=98.348566ms)
	I0815 01:14:36.857758   58243 fix.go:200] guest clock delta is within tolerance: 98.348566ms
	I0815 01:14:36.857766   58243 start.go:83] releasing machines lock for "kubernetes-upgrade-146394", held for 23.760346007s
	I0815 01:14:36.857803   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:36.858108   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:14:36.861064   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.861494   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.861526   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.861785   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:36.862250   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:36.862437   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:14:36.862565   58243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:14:36.862611   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.862642   58243 ssh_runner.go:195] Run: cat /version.json
	I0815 01:14:36.862665   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:14:36.865831   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.866229   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.866265   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.866289   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.866589   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.866788   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.866902   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:36.866947   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:36.867016   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.867173   58243 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:14:36.867527   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:14:36.867678   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:14:36.867864   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:14:36.868026   58243 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:14:36.950679   58243 ssh_runner.go:195] Run: systemctl --version
	I0815 01:14:36.986803   58243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:14:37.161919   58243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:14:37.167926   58243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:14:37.168001   58243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:14:37.185091   58243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:14:37.185119   58243 start.go:495] detecting cgroup driver to use...
	I0815 01:14:37.185179   58243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:14:37.205696   58243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:14:37.220240   58243 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:14:37.220311   58243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:14:37.233978   58243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:14:37.247411   58243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:14:37.377098   58243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:14:37.516115   58243 docker.go:233] disabling docker service ...
	I0815 01:14:37.516174   58243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:14:37.530806   58243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:14:37.544712   58243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:14:37.686362   58243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:14:37.847162   58243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:14:37.868451   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:14:37.889597   58243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:14:37.889664   58243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:14:37.903400   58243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:14:37.903474   58243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:14:37.917251   58243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:14:37.927685   58243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:14:37.938143   58243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:14:37.953232   58243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:14:37.967067   58243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:14:37.967130   58243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:14:37.981219   58243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:14:38.000541   58243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:14:38.141701   58243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:14:38.281901   58243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:14:38.281993   58243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:14:38.286474   58243 start.go:563] Will wait 60s for crictl version
	I0815 01:14:38.286524   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:38.290863   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:14:38.326011   58243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:14:38.326095   58243 ssh_runner.go:195] Run: crio --version
	I0815 01:14:38.352820   58243 ssh_runner.go:195] Run: crio --version
	I0815 01:14:38.380611   58243 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:14:38.381823   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:14:38.384553   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:38.384976   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:14:27 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:14:38.385008   58243 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:14:38.385207   58243 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:14:38.389147   58243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:14:38.400644   58243 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:14:38.400804   58243 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:14:38.400884   58243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:14:38.431037   58243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:14:38.431099   58243 ssh_runner.go:195] Run: which lz4
	I0815 01:14:38.435049   58243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:14:38.439039   58243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:14:38.439068   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:14:39.864669   58243 crio.go:462] duration metric: took 1.429637099s to copy over tarball
	I0815 01:14:39.864739   58243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:14:42.314494   58243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.449731788s)
	I0815 01:14:42.314527   58243 crio.go:469] duration metric: took 2.449828072s to extract the tarball
	I0815 01:14:42.314536   58243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:14:42.355812   58243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:14:42.398570   58243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:14:42.398590   58243 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:14:42.398658   58243 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:14:42.398681   58243 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.398698   58243 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.398736   58243 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.398760   58243 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:14:42.398936   58243 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.399019   58243 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.399028   58243 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:14:42.400363   58243 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:14:42.400376   58243 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.400365   58243 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.400414   58243 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.400417   58243 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.400402   58243 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:14:42.400444   58243 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:14:42.400850   58243 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.640959   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:14:42.659144   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.670134   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.690337   58243 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:14:42.690377   58243 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:14:42.690415   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.701636   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.702740   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.706215   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.710255   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:14:42.728606   58243 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:14:42.728665   58243 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.728718   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.740749   58243 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:14:42.740788   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:14:42.740802   58243 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.740847   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.780807   58243 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:14:42.780862   58243 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.780917   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.833727   58243 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:14:42.833772   58243 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.833827   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.839559   58243 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:14:42.839604   58243 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.839658   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.848079   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.848267   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.848267   58243 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:14:42.848344   58243 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:14:42.848354   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.848371   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.848406   58243 ssh_runner.go:195] Run: which crictl
	I0815 01:14:42.848414   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.848553   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:14:42.973669   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:42.973848   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:42.973958   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:42.984004   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:14:42.984064   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:42.984017   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:42.988824   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:14:43.130249   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:14:43.130264   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:14:43.130304   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:14:43.130343   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:14:43.137338   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:14:43.137393   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:14:43.137422   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:14:43.246469   58243 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:14:43.271585   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:14:43.271637   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:14:43.271704   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:14:43.271729   58243 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:14:43.283140   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:14:43.283178   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:14:43.425512   58243 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:14:43.425593   58243 cache_images.go:92] duration metric: took 1.02698949s to LoadCachedImages
	W0815 01:14:43.425707   58243 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0815 01:14:43.425728   58243 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.20.0 crio true true} ...
	I0815 01:14:43.425854   58243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-146394 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:14:43.425924   58243 ssh_runner.go:195] Run: crio config
	I0815 01:14:43.479920   58243 cni.go:84] Creating CNI manager for ""
	I0815 01:14:43.479940   58243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:14:43.479950   58243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:14:43.479967   58243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-146394 NodeName:kubernetes-upgrade-146394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:14:43.480104   58243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-146394"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:14:43.480164   58243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:14:43.490146   58243 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:14:43.490209   58243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:14:43.499444   58243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0815 01:14:43.518913   58243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:14:43.536854   58243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0815 01:14:43.555573   58243 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0815 01:14:43.559222   58243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:14:43.570517   58243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:14:43.694962   58243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:14:43.713996   58243 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394 for IP: 192.168.72.130
	I0815 01:14:43.714018   58243 certs.go:194] generating shared ca certs ...
	I0815 01:14:43.714035   58243 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:43.714204   58243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:14:43.714280   58243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:14:43.714295   58243 certs.go:256] generating profile certs ...
	I0815 01:14:43.714363   58243 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key
	I0815 01:14:43.714380   58243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.crt with IP's: []
	I0815 01:14:43.819378   58243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.crt ...
	I0815 01:14:43.819412   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.crt: {Name:mkab7dc7fb491dc2e00b551ea637f670e222f7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:43.844451   58243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key ...
	I0815 01:14:43.844504   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key: {Name:mk06a0291af2aa7fa984bc10bf856e5d616ca3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:43.844687   58243 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c
	I0815 01:14:43.844711   58243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt.6a0a8e0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.130]
	I0815 01:14:44.259209   58243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt.6a0a8e0c ...
	I0815 01:14:44.259240   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt.6a0a8e0c: {Name:mka398817553c3b2d4fcf87772c4525a72e14e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:44.259427   58243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c ...
	I0815 01:14:44.259448   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c: {Name:mkdf24fe76296ee8098930c428f866cf8d0951df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:44.259559   58243 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt.6a0a8e0c -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt
	I0815 01:14:44.259652   58243 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key
	I0815 01:14:44.259714   58243 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key
	I0815 01:14:44.259726   58243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt with IP's: []
	I0815 01:14:44.326740   58243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt ...
	I0815 01:14:44.326766   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt: {Name:mk54faeb419cfd0d07da0c8c1ff1f91ee14ad992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:44.399676   58243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key ...
	I0815 01:14:44.399738   58243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key: {Name:mkc9efa14dbf5dd6bba5782ebd94efd3d5d74fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:14:44.400048   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:14:44.400102   58243 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:14:44.400128   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:14:44.400165   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:14:44.400201   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:14:44.400236   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:14:44.400318   58243 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:14:44.401357   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:14:44.425790   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:14:44.455492   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:14:44.477564   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:14:44.498972   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 01:14:44.522811   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:14:44.559523   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:14:44.599704   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:14:44.631849   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:14:44.656261   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:14:44.679236   58243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:14:44.701313   58243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:14:44.716931   58243 ssh_runner.go:195] Run: openssl version
	I0815 01:14:44.722449   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:14:44.731870   58243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:14:44.735859   58243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:14:44.735924   58243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:14:44.741329   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:14:44.751191   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:14:44.760786   58243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:14:44.764929   58243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:14:44.764978   58243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:14:44.770505   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:14:44.780237   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:14:44.790361   58243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:14:44.795485   58243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:14:44.795561   58243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:14:44.801044   58243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:14:44.810752   58243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:14:44.814338   58243 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 01:14:44.814393   58243 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:14:44.814467   58243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:14:44.814510   58243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:14:44.849180   58243 cri.go:89] found id: ""
	I0815 01:14:44.849259   58243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:14:44.858384   58243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:14:44.867219   58243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:14:44.875619   58243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:14:44.875643   58243 kubeadm.go:157] found existing configuration files:
	
	I0815 01:14:44.875696   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:14:44.884288   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:14:44.884346   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:14:44.893241   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:14:44.902194   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:14:44.902243   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:14:44.911051   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:14:44.920050   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:14:44.920113   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:14:44.930102   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:14:44.938970   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:14:44.939016   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:14:44.948462   58243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:14:45.055862   58243 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:14:45.055951   58243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:14:45.193093   58243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:14:45.193258   58243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:14:45.193404   58243 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:14:45.413338   58243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:14:45.445651   58243 out.go:204]   - Generating certificates and keys ...
	I0815 01:14:45.445796   58243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:14:45.445890   58243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:14:45.576255   58243 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 01:14:45.735135   58243 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 01:14:45.925859   58243 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 01:14:46.085170   58243 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 01:14:46.468015   58243 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 01:14:46.468428   58243 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-146394 localhost] and IPs [192.168.72.130 127.0.0.1 ::1]
	I0815 01:14:46.749358   58243 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 01:14:46.749920   58243 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-146394 localhost] and IPs [192.168.72.130 127.0.0.1 ::1]
	I0815 01:14:46.955072   58243 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 01:14:47.106085   58243 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 01:14:47.294993   58243 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 01:14:47.295126   58243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:14:47.524310   58243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:14:47.671484   58243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:14:47.865382   58243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:14:47.972987   58243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:14:47.996363   58243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:14:47.997134   58243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:14:47.997194   58243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:14:48.174693   58243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:14:48.176248   58243 out.go:204]   - Booting up control plane ...
	I0815 01:14:48.176392   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:14:48.194841   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:14:48.196567   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:14:48.198136   58243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:14:48.208416   58243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:15:28.203709   58243 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:15:28.203802   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:15:28.204087   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:15:33.204384   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:15:33.204700   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:15:43.203795   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:15:43.203977   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:16:03.203444   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:16:03.203708   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:16:43.205089   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:16:43.205614   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:16:43.205634   58243 kubeadm.go:310] 
	I0815 01:16:43.205729   58243 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:16:43.205835   58243 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:16:43.205848   58243 kubeadm.go:310] 
	I0815 01:16:43.205930   58243 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:16:43.206000   58243 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:16:43.206272   58243 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:16:43.206283   58243 kubeadm.go:310] 
	I0815 01:16:43.206568   58243 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:16:43.206661   58243 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:16:43.206741   58243 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:16:43.206778   58243 kubeadm.go:310] 
	I0815 01:16:43.207102   58243 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:16:43.207293   58243 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:16:43.207316   58243 kubeadm.go:310] 
	I0815 01:16:43.207766   58243 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:16:43.207993   58243 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:16:43.208195   58243 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:16:43.208366   58243 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:16:43.208427   58243 kubeadm.go:310] 
	I0815 01:16:43.208915   58243 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:16:43.209016   58243 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:16:43.209178   58243 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 01:16:43.209240   58243 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-146394 localhost] and IPs [192.168.72.130 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-146394 localhost] and IPs [192.168.72.130 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 01:16:43.209298   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:16:43.872306   58243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:16:43.886105   58243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:16:43.896367   58243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:16:43.896393   58243 kubeadm.go:157] found existing configuration files:
	
	I0815 01:16:43.896429   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:16:43.906306   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:16:43.906364   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:16:43.916604   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:16:43.925017   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:16:43.925068   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:16:43.934021   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:16:43.943893   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:16:43.943945   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:16:43.953276   58243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:16:43.965245   58243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:16:43.965292   58243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:16:43.977349   58243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:16:44.045703   58243 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:16:44.045758   58243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:16:44.189896   58243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:16:44.190012   58243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:16:44.190122   58243 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:16:44.371670   58243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:16:44.373792   58243 out.go:204]   - Generating certificates and keys ...
	I0815 01:16:44.373910   58243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:16:44.374012   58243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:16:44.374134   58243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:16:44.374226   58243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:16:44.374617   58243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:16:44.374763   58243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:16:44.376603   58243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:16:44.376975   58243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:16:44.377471   58243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:16:44.377822   58243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:16:44.377938   58243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:16:44.378005   58243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:16:44.612817   58243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:16:44.763436   58243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:16:45.086344   58243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:16:45.440217   58243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:16:45.463373   58243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:16:45.463504   58243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:16:45.463563   58243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:16:45.617742   58243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:16:45.619646   58243 out.go:204]   - Booting up control plane ...
	I0815 01:16:45.619801   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:16:45.628177   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:16:45.628856   58243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:16:45.629689   58243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:16:45.641870   58243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:17:25.644908   58243 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:17:25.645021   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:17:25.645353   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:17:30.645734   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:17:30.645985   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:17:40.646762   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:17:40.647046   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:18:00.645917   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:18:00.646222   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:18:40.646465   58243 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:18:40.646797   58243 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:18:40.646830   58243 kubeadm.go:310] 
	I0815 01:18:40.646903   58243 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:18:40.646973   58243 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:18:40.646993   58243 kubeadm.go:310] 
	I0815 01:18:40.647051   58243 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:18:40.647099   58243 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:18:40.647241   58243 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:18:40.647251   58243 kubeadm.go:310] 
	I0815 01:18:40.647402   58243 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:18:40.647466   58243 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:18:40.647514   58243 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:18:40.647532   58243 kubeadm.go:310] 
	I0815 01:18:40.647683   58243 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:18:40.647769   58243 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:18:40.647822   58243 kubeadm.go:310] 
	I0815 01:18:40.647966   58243 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:18:40.648103   58243 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:18:40.648232   58243 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:18:40.648316   58243 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:18:40.648324   58243 kubeadm.go:310] 
	I0815 01:18:40.648689   58243 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:18:40.648803   58243 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:18:40.648895   58243 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:18:40.648983   58243 kubeadm.go:394] duration metric: took 3m55.834593058s to StartCluster
	I0815 01:18:40.649028   58243 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:18:40.649097   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:18:40.701526   58243 cri.go:89] found id: ""
	I0815 01:18:40.701555   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.701562   58243 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:18:40.701572   58243 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:18:40.701639   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:18:40.736609   58243 cri.go:89] found id: ""
	I0815 01:18:40.736636   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.736647   58243 logs.go:278] No container was found matching "etcd"
	I0815 01:18:40.736670   58243 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:18:40.736735   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:18:40.776593   58243 cri.go:89] found id: ""
	I0815 01:18:40.776622   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.776632   58243 logs.go:278] No container was found matching "coredns"
	I0815 01:18:40.776638   58243 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:18:40.776713   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:18:40.816707   58243 cri.go:89] found id: ""
	I0815 01:18:40.816741   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.816753   58243 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:18:40.816761   58243 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:18:40.816826   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:18:40.876518   58243 cri.go:89] found id: ""
	I0815 01:18:40.876547   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.876557   58243 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:18:40.876564   58243 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:18:40.876633   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:18:40.912994   58243 cri.go:89] found id: ""
	I0815 01:18:40.913026   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.913036   58243 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:18:40.913044   58243 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:18:40.913109   58243 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:18:40.947906   58243 cri.go:89] found id: ""
	I0815 01:18:40.947937   58243 logs.go:276] 0 containers: []
	W0815 01:18:40.947948   58243 logs.go:278] No container was found matching "kindnet"
	I0815 01:18:40.947959   58243 logs.go:123] Gathering logs for kubelet ...
	I0815 01:18:40.947975   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:18:40.999358   58243 logs.go:123] Gathering logs for dmesg ...
	I0815 01:18:40.999402   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:18:41.015040   58243 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:18:41.015070   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:18:41.137280   58243 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:18:41.137303   58243 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:18:41.137317   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:18:41.254583   58243 logs.go:123] Gathering logs for container status ...
	I0815 01:18:41.254623   58243 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0815 01:18:41.292929   58243 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:18:41.292971   58243 out.go:239] * 
	W0815 01:18:41.293024   58243 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:18:41.293046   58243 out.go:239] * 
	* 
	W0815 01:18:41.293915   58243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:18:41.296938   58243 out.go:177] 
	W0815 01:18:41.298146   58243 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:18:41.298210   58243 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:18:41.298243   58243 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:18:41.299654   58243 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
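The repeated kubelet-check failures above show that the kubelet never answered its health probe on http://localhost:10248/healthz, so kubeadm timed out waiting for the control plane and minikube exited with K8S_KUBELET_NOT_RUNNING. The log's own suggestion is to retry with the kubelet pinned to the systemd cgroup driver; a minimal sketch of that retry (same profile, memory and versions as the failed start, plus only the suggested override) would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Whether this actually resolves the failure depends on the cgroup setup inside the guest; 'journalctl -xeu kubelet' in the VM remains the authoritative place to confirm the root cause.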
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-146394
E0815 01:18:45.641026   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-146394: (6.347329865s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-146394 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-146394 status --format={{.Host}}: exit status 7 (75.3214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
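minikube status encodes unhealthy components as bits of its exit code, so exit status 7 here most likely means host, kubelet and apiserver are all reported as stopped, which is expected immediately after the explicit stop above and is why the harness notes it "may be ok". A quick way to see both the state and the numeric code from a shell (assuming the same profile name; the echo wrapper is added here for illustration):

	out/minikube-linux-amd64 -p kubernetes-upgrade-146394 status --format='{{.Host}}'; echo "exit=$?"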
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.923145222s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-146394 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (98.359824ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-146394] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-146394
	    minikube start -p kubernetes-upgrade-146394 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1463942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-146394 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
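The downgrade is refused by design: a control plane that has already run v1.31.0 may have written API objects and etcd data that v1.20.0 components cannot read, so minikube blocks the in-place downgrade and offers the three options shown above. Option 1, recreating the profile at the old version, is the destructive but reliable path; as a sketch, using the profile name from the suggestion:

	minikube delete -p kubernetes-upgrade-146394
	minikube start -p kubernetes-upgrade-146394 --kubernetes-version=v1.20.0

The test instead takes the non-destructive route and simply restarts the existing cluster at v1.31.0, which is what the next step does.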
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0815 01:19:41.522846   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-146394 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.445889599s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-15 01:20:25.351320391 +0000 UTC m=+4500.051551983
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-146394 -n kubernetes-upgrade-146394
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-146394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-146394 logs -n 25: (1.788953428s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:15 UTC | 15 Aug 24 01:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-339919             | running-upgrade-339919    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p cert-expiration-131152             | cert-expiration-131152    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-312183 sudo           | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p pause-064537 --memory=2048         | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-221548 ssh cat     | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	| start   | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-411164 ssh               | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-411164 -- sudo        | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p old-k8s-version-390782             | old-k8s-version-390782    | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p pause-064537                       | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-064537                       | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:19 UTC |
	| start   | -p no-preload-884893                  | no-preload-884893         | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-131152             | cert-expiration-131152    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:20:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:20:02.917372   64249 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:20:02.917613   64249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:20:02.917616   64249 out.go:304] Setting ErrFile to fd 2...
	I0815 01:20:02.917620   64249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:20:02.917808   64249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:20:02.918323   64249 out.go:298] Setting JSON to false
	I0815 01:20:02.919226   64249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7348,"bootTime":1723677455,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:20:02.919272   64249 start.go:139] virtualization: kvm guest
	I0815 01:20:02.921470   64249 out.go:177] * [cert-expiration-131152] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:20:02.922782   64249 notify.go:220] Checking for updates...
	I0815 01:20:02.922802   64249 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:20:02.924258   64249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:20:02.925511   64249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:20:02.926746   64249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:20:02.928007   64249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:20:02.929237   64249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:20:00.325425   64020 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:20:00.328514   64020 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:20:00.329013   64020 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:20:00.329042   64020 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:20:00.329362   64020 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:20:00.333792   64020 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:20:00.333955   64020 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:20:00.334026   64020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:20:00.385126   64020 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:20:00.385154   64020 crio.go:433] Images already preloaded, skipping extraction
	I0815 01:20:00.385213   64020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:20:00.421459   64020 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:20:00.421480   64020 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:20:00.421486   64020 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0815 01:20:00.421601   64020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-146394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:20:00.421684   64020 ssh_runner.go:195] Run: crio config
	I0815 01:20:00.465025   64020 cni.go:84] Creating CNI manager for ""
	I0815 01:20:00.465051   64020 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:20:00.465069   64020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:20:00.465096   64020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-146394 NodeName:kubernetes-upgrade-146394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:20:00.465262   64020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-146394"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:20:00.465334   64020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:20:00.477121   64020 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:20:00.477189   64020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:20:00.487407   64020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0815 01:20:00.503855   64020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:20:00.520606   64020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0815 01:20:00.537567   64020 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0815 01:20:00.541684   64020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:20:00.680516   64020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:20:00.707578   64020 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394 for IP: 192.168.72.130
	I0815 01:20:00.707609   64020 certs.go:194] generating shared ca certs ...
	I0815 01:20:00.707636   64020 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:20:00.707820   64020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:20:00.707882   64020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:20:00.707896   64020 certs.go:256] generating profile certs ...
	I0815 01:20:00.708016   64020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key
	I0815 01:20:00.708091   64020 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c
	I0815 01:20:00.708144   64020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key
	I0815 01:20:00.708320   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:20:00.708366   64020 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:20:00.708378   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:20:00.708430   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:20:00.708465   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:20:00.708497   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:20:00.708559   64020 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:20:00.709513   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:20:00.760892   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:20:00.797202   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:20:00.826398   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:20:00.859064   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 01:20:00.886249   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:20:00.941432   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:20:00.988991   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:20:01.019460   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:20:01.104601   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:20:01.139983   64020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:20:01.176899   64020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:20:01.224332   64020 ssh_runner.go:195] Run: openssl version
	I0815 01:20:01.234852   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:20:01.268192   64020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:20:01.280287   64020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:20:01.280360   64020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:20:01.295828   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:20:01.312440   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:20:01.338644   64020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:20:01.356335   64020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:20:01.356406   64020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:20:01.378608   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:20:01.403629   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:20:01.426307   64020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:20:01.441600   64020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:20:01.441666   64020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:20:01.457634   64020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:20:01.475874   64020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:20:01.481947   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:20:01.488075   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:20:01.500353   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:20:01.510554   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:20:01.517179   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:20:01.522568   64020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:20:01.528920   64020 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:20:01.529039   64020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:20:01.529109   64020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:20:01.613268   64020 cri.go:89] found id: "716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27"
	I0815 01:20:01.613298   64020 cri.go:89] found id: "70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911"
	I0815 01:20:01.613304   64020 cri.go:89] found id: "70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620"
	I0815 01:20:01.613310   64020 cri.go:89] found id: "22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa"
	I0815 01:20:01.613317   64020 cri.go:89] found id: "c03e43d57559fd43b0619f9a84f6582d39d52e6825582618e7ec5db04d5906ed"
	I0815 01:20:01.613321   64020 cri.go:89] found id: "b36d86c801ad3cc9f47101dc155f0b915d9b752ebfff61a73988111bd3b9cb29"
	I0815 01:20:01.613325   64020 cri.go:89] found id: "0f1fd55bb1996be59c8cfccaaade995f0c6d9b1ec2e2b6504bdd095273fda87d"
	I0815 01:20:01.613328   64020 cri.go:89] found id: "8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8"
	I0815 01:20:01.613332   64020 cri.go:89] found id: "0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a"
	I0815 01:20:01.613341   64020 cri.go:89] found id: "28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964"
	I0815 01:20:01.613345   64020 cri.go:89] found id: "cd635a8db91503fb4ecb15b7b696329786d053ec48e2c0317cc9f87b1b1779ae"
	I0815 01:20:01.613349   64020 cri.go:89] found id: "309ce11b74cdd8042b299763e4f26735fd50a11d955266f55a19890df48a2fc3"
	I0815 01:20:01.613353   64020 cri.go:89] found id: ""
	I0815 01:20:01.613406   64020 ssh_runner.go:195] Run: sudo runc list -f json
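	(Editor's note) The cri.go lines above collect kube-system container IDs by running crictl inside the VM with a pod-namespace label filter. A rough local equivalent in Go — a sketch assuming crictl and sudo are available on the host, not the ssh_runner-based implementation minikube actually uses:

	// Sketch: list kube-system container IDs via crictl, mirroring the
	// `crictl ps -a --quiet --label ...` command logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}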
	
	
	==> CRI-O <==
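	(Editor's note) The CRI-O debug entries below are the server side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued while these logs were gathered. A minimal client-side sketch of the same ListContainers call, assuming CRI-O's default socket at /var/run/crio/crio.sock:

	// Sketch: call the CRI RuntimeService ListContainers RPC against CRI-O.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O CRI socket (default path; adjust if configured differently).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// No filter: returns the full container list, like the responses logged below.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State.String())
		}
	}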
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.104325581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684826104290518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05348047-fa17-4f8f-a7e5-a313d0dd1b97 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.105624174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5a0a6ce-5289-4687-961e-7d0f8a3e630b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.105689645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5a0a6ce-5289-4687-961e-7d0f8a3e630b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.106119725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77039ecebf346f6abb828c67e596811b47c86f46f3f162714d84ef75bd028497,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822973670610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e6c7315c27571b2edafaff4c73a6aa8480895a2ad0d91dcead59effcd11fe0,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822992451604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe924b9c2632775aecf767467d680766ba2d1de53c9753be754d094b93a28824,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723684822983853008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f104ad47ebe5299cd6349726a25b85f57b5ecb35141dac1f301fe6a0af18b99,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAI
NER_RUNNING,CreatedAt:1723684820121481463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c654138c7a1592314839c2eb8251d6dab60fba4c0be8b9a3c36556b0b37f31a,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
,State:CONTAINER_RUNNING,CreatedAt:1723684820107683097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f917c798aaf8c950e5702f3bd30f4a7e0d18d28d2bb9e674f5a84b952e7f11,PodSandboxId:7dfb899c54bcdc18e81f5b92b23fd7492171ef9fe5a0d22b27c13cb441c340d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CO
NTAINER_RUNNING,CreatedAt:1723684810441279130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:
1723684809471780153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54ac9a226ed4be7532fd0b0af18fe73e75862275b6bf8ede7e7d474f854f840,PodSandboxId:3d2607f30d7c516fb94da386c104cad8fcfc132fe6b7ddbfe5d87788847dd1ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684804947403721,Labels:
map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773b653a371bda4173cf3e8b1df8496804db22d092b454dc6cfd2f48be9afda0,PodSandboxId:f5f40a9ffa55ed66f6f8981ad594962d98cb283e595371a24989f2955d3bdd68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684801949567973,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684801352030634,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684801268578708,Labels:map[string
]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684801076387207,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684800880192223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8,PodSandboxId:74577d667415bbefc1d5bfd2ac0fccb86128acbcacd27
368f7f130b96f345c59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684768875118186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964,PodSandboxId:62e98ada55c4f82102dafeecdefddd567629828dbec591a26b47e037787c1f14,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684758402287810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a,PodSandboxId:b239e67f4e10985bf1ec15cf4f8a1378bdbd6c4098e820d1ce297e4faf724f22,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684758407212182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5a0a6ce-5289-4687-961e-7d0f8a3e630b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.161665173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f72ac17f-1b76-4f5d-803c-e80a060bacf6 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.161772883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f72ac17f-1b76-4f5d-803c-e80a060bacf6 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.162756950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d90fa771-c769-44a5-bc53-c3fa44562e0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.163249434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684826163219689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d90fa771-c769-44a5-bc53-c3fa44562e0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.163753432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10118d2b-698c-4610-b333-23c0ee9985df name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.163807942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10118d2b-698c-4610-b333-23c0ee9985df name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.164174540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77039ecebf346f6abb828c67e596811b47c86f46f3f162714d84ef75bd028497,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822973670610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e6c7315c27571b2edafaff4c73a6aa8480895a2ad0d91dcead59effcd11fe0,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822992451604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe924b9c2632775aecf767467d680766ba2d1de53c9753be754d094b93a28824,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723684822983853008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f104ad47ebe5299cd6349726a25b85f57b5ecb35141dac1f301fe6a0af18b99,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAI
NER_RUNNING,CreatedAt:1723684820121481463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c654138c7a1592314839c2eb8251d6dab60fba4c0be8b9a3c36556b0b37f31a,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
,State:CONTAINER_RUNNING,CreatedAt:1723684820107683097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f917c798aaf8c950e5702f3bd30f4a7e0d18d28d2bb9e674f5a84b952e7f11,PodSandboxId:7dfb899c54bcdc18e81f5b92b23fd7492171ef9fe5a0d22b27c13cb441c340d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CO
NTAINER_RUNNING,CreatedAt:1723684810441279130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:
1723684809471780153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54ac9a226ed4be7532fd0b0af18fe73e75862275b6bf8ede7e7d474f854f840,PodSandboxId:3d2607f30d7c516fb94da386c104cad8fcfc132fe6b7ddbfe5d87788847dd1ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684804947403721,Labels:
map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773b653a371bda4173cf3e8b1df8496804db22d092b454dc6cfd2f48be9afda0,PodSandboxId:f5f40a9ffa55ed66f6f8981ad594962d98cb283e595371a24989f2955d3bdd68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684801949567973,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684801352030634,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684801268578708,Labels:map[string
]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684801076387207,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684800880192223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8,PodSandboxId:74577d667415bbefc1d5bfd2ac0fccb86128acbcacd27
368f7f130b96f345c59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684768875118186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964,PodSandboxId:62e98ada55c4f82102dafeecdefddd567629828dbec591a26b47e037787c1f14,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684758402287810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a,PodSandboxId:b239e67f4e10985bf1ec15cf4f8a1378bdbd6c4098e820d1ce297e4faf724f22,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684758407212182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10118d2b-698c-4610-b333-23c0ee9985df name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.223676154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b7befcc-255f-4791-9666-8e717f1eaa0b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.223760414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b7befcc-255f-4791-9666-8e717f1eaa0b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.225366493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32703aea-8156-4a13-ac5d-b2939f3ce0d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.225908540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684826225870142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32703aea-8156-4a13-ac5d-b2939f3ce0d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.226814475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0fe89f4-3abe-42b7-9d9b-3ae54b3b42a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.226915982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0fe89f4-3abe-42b7-9d9b-3ae54b3b42a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.227511222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77039ecebf346f6abb828c67e596811b47c86f46f3f162714d84ef75bd028497,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822973670610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e6c7315c27571b2edafaff4c73a6aa8480895a2ad0d91dcead59effcd11fe0,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822992451604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe924b9c2632775aecf767467d680766ba2d1de53c9753be754d094b93a28824,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723684822983853008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f104ad47ebe5299cd6349726a25b85f57b5ecb35141dac1f301fe6a0af18b99,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAI
NER_RUNNING,CreatedAt:1723684820121481463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c654138c7a1592314839c2eb8251d6dab60fba4c0be8b9a3c36556b0b37f31a,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
,State:CONTAINER_RUNNING,CreatedAt:1723684820107683097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f917c798aaf8c950e5702f3bd30f4a7e0d18d28d2bb9e674f5a84b952e7f11,PodSandboxId:7dfb899c54bcdc18e81f5b92b23fd7492171ef9fe5a0d22b27c13cb441c340d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CO
NTAINER_RUNNING,CreatedAt:1723684810441279130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:
1723684809471780153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54ac9a226ed4be7532fd0b0af18fe73e75862275b6bf8ede7e7d474f854f840,PodSandboxId:3d2607f30d7c516fb94da386c104cad8fcfc132fe6b7ddbfe5d87788847dd1ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684804947403721,Labels:
map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773b653a371bda4173cf3e8b1df8496804db22d092b454dc6cfd2f48be9afda0,PodSandboxId:f5f40a9ffa55ed66f6f8981ad594962d98cb283e595371a24989f2955d3bdd68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684801949567973,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684801352030634,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684801268578708,Labels:map[string
]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684801076387207,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684800880192223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8,PodSandboxId:74577d667415bbefc1d5bfd2ac0fccb86128acbcacd27
368f7f130b96f345c59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684768875118186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964,PodSandboxId:62e98ada55c4f82102dafeecdefddd567629828dbec591a26b47e037787c1f14,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684758402287810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a,PodSandboxId:b239e67f4e10985bf1ec15cf4f8a1378bdbd6c4098e820d1ce297e4faf724f22,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684758407212182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0fe89f4-3abe-42b7-9d9b-3ae54b3b42a4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.272530655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5108821d-0cae-4b9d-be33-a5988bb0969e name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.272606709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5108821d-0cae-4b9d-be33-a5988bb0969e name=/runtime.v1.RuntimeService/Version
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.281000523Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=711f4915-dab1-44c0-b9de-4193a87cbf12 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.281747077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684826281716397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=711f4915-dab1-44c0-b9de-4193a87cbf12 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.282406021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e341d09-c6a6-4b4d-9853-d6b9e0250817 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.282506951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e341d09-c6a6-4b4d-9853-d6b9e0250817 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:20:26 kubernetes-upgrade-146394 crio[2321]: time="2024-08-15 01:20:26.283044380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77039ecebf346f6abb828c67e596811b47c86f46f3f162714d84ef75bd028497,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822973670610,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59e6c7315c27571b2edafaff4c73a6aa8480895a2ad0d91dcead59effcd11fe0,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684822992451604,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe924b9c2632775aecf767467d680766ba2d1de53c9753be754d094b93a28824,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723684822983853008,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f104ad47ebe5299cd6349726a25b85f57b5ecb35141dac1f301fe6a0af18b99,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAI
NER_RUNNING,CreatedAt:1723684820121481463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c654138c7a1592314839c2eb8251d6dab60fba4c0be8b9a3c36556b0b37f31a,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
,State:CONTAINER_RUNNING,CreatedAt:1723684820107683097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84f917c798aaf8c950e5702f3bd30f4a7e0d18d28d2bb9e674f5a84b952e7f11,PodSandboxId:7dfb899c54bcdc18e81f5b92b23fd7492171ef9fe5a0d22b27c13cb441c340d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CO
NTAINER_RUNNING,CreatedAt:1723684810441279130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4,PodSandboxId:5ae96a965fba9e961039179d6d97512e1585087d751d082c9ffcc85519a033b8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:
1723684809471780153,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26afacdc-c874-4017-b5d3-deccebeb4f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f54ac9a226ed4be7532fd0b0af18fe73e75862275b6bf8ede7e7d474f854f840,PodSandboxId:3d2607f30d7c516fb94da386c104cad8fcfc132fe6b7ddbfe5d87788847dd1ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684804947403721,Labels:
map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773b653a371bda4173cf3e8b1df8496804db22d092b454dc6cfd2f48be9afda0,PodSandboxId:f5f40a9ffa55ed66f6f8981ad594962d98cb283e595371a24989f2955d3bdd68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684801949567973,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27,PodSandboxId:6b5ecd2fe325d66bc68b23e5871ad981dbbeef6b66dc0b3f6c34ec2abc230fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684801352030634,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ef6e2490856765cf874798c277eb6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911,PodSandboxId:34294112a6ed3517e7dbb5bb388ee3e5d34f743fabf1c5c69090802a66045014,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684801268578708,Labels:map[string
]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41ac7df1d512cf15bda3354bb1cebbe,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620,PodSandboxId:67044aba1f5d6fa3d75e5aa16c76144f0406175703876c2c7bf89e265f2b6f8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684801076387207,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-s5rfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cadf49b-a729-45fb-8b89-1409245d81fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa,PodSandboxId:0153e25f9901a55ad8d35e714d1b8c3c4cc66dd58e607e6fba90bfb309ca38c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684800880192223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5g6xg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc9787db-ba23-48c7-9133-b7e5d10963c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8,PodSandboxId:74577d667415bbefc1d5bfd2ac0fccb86128acbcacd27
368f7f130b96f345c59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684768875118186,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8r7r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c330db40-f081-4df7-b1a0-bf67217b2944,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964,PodSandboxId:62e98ada55c4f82102dafeecdefddd567629828dbec591a26b47e037787c1f14,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684758402287810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2494acec48842a3f829bc787759928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a,PodSandboxId:b239e67f4e10985bf1ec15cf4f8a1378bdbd6c4098e820d1ce297e4faf724f22,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684758407212182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-146394,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017ef7126df3f9037866256c2d7ab349,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e341d09-c6a6-4b4d-9853-d6b9e0250817 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	59e6c7315c275       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   0153e25f9901a       coredns-6f6b679f8f-5g6xg
	fe924b9c26327       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   5ae96a965fba9       storage-provisioner
	77039ecebf346       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   67044aba1f5d6       coredns-6f6b679f8f-s5rfn
	7f104ad47ebe5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   6 seconds ago        Running             kube-controller-manager   2                   6b5ecd2fe325d       kube-controller-manager-kubernetes-upgrade-146394
	1c654138c7a15       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   6 seconds ago        Running             kube-apiserver            2                   34294112a6ed3       kube-apiserver-kubernetes-upgrade-146394
	84f917c798aaf       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 seconds ago       Running             kube-proxy                1                   7dfb899c54bcd       kube-proxy-8r7r2
	9a4237914e00c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago       Exited              storage-provisioner       1                   5ae96a965fba9       storage-provisioner
	f54ac9a226ed4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago       Running             etcd                      1                   3d2607f30d7c5       etcd-kubernetes-upgrade-146394
	773b653a371bd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   24 seconds ago       Running             kube-scheduler            1                   f5f40a9ffa55e       kube-scheduler-kubernetes-upgrade-146394
	716b49e3e43bc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago       Exited              kube-controller-manager   1                   6b5ecd2fe325d       kube-controller-manager-kubernetes-upgrade-146394
	70346e758ad31       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago       Exited              kube-apiserver            1                   34294112a6ed3       kube-apiserver-kubernetes-upgrade-146394
	70c233e21e086       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago       Exited              coredns                   1                   67044aba1f5d6       coredns-6f6b679f8f-s5rfn
	22c9e4f8a0fd6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago       Exited              coredns                   1                   0153e25f9901a       coredns-6f6b679f8f-5g6xg
	8d8814a487efe       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   57 seconds ago       Exited              kube-proxy                0                   74577d667415b       kube-proxy-8r7r2
	0d95e23234877       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Exited              kube-scheduler            0                   b239e67f4e109       kube-scheduler-kubernetes-upgrade-146394
	28f55b2bc46fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   62e98ada55c4f       etcd-kubernetes-upgrade-146394
	
	
	==> coredns [22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [59e6c7315c27571b2edafaff4c73a6aa8480895a2ad0d91dcead59effcd11fe0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [77039ecebf346f6abb828c67e596811b47c86f46f3f162714d84ef75bd028497] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-146394
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-146394
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:19:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-146394
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:20:22 +0000   Thu, 15 Aug 2024 01:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:20:22 +0000   Thu, 15 Aug 2024 01:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:20:22 +0000   Thu, 15 Aug 2024 01:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:20:22 +0000   Thu, 15 Aug 2024 01:19:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    kubernetes-upgrade-146394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 409454df1fb34e55b43bbafd5e17d9ed
	  System UUID:                409454df-1fb3-4e55-b43b-bafd5e17d9ed
	  Boot ID:                    c464fd37-6de5-444b-9134-7e78e320e661
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5g6xg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 coredns-6f6b679f8f-s5rfn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-kubernetes-upgrade-146394                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         59s
	  kube-system                 kube-apiserver-kubernetes-upgrade-146394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-146394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-8r7r2                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-kubernetes-upgrade-146394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  69s (x8 over 70s)  kubelet          Node kubernetes-upgrade-146394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 70s)  kubelet          Node kubernetes-upgrade-146394 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 70s)  kubelet          Node kubernetes-upgrade-146394 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                node-controller  Node kubernetes-upgrade-146394 event: Registered Node kubernetes-upgrade-146394 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-146394 event: Registered Node kubernetes-upgrade-146394 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 01:19] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.060209] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065212] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.206138] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.118478] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.262565] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +3.869842] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +2.382137] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[  +0.061470] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.754411] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.083322] kauditd_printk_skb: 69 callbacks suppressed
	[ +30.714008] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.078547] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.062735] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.177211] systemd-fstab-generator[2224]: Ignoring "noauto" option for root device
	[  +0.138299] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.284152] systemd-fstab-generator[2264]: Ignoring "noauto" option for root device
	[Aug15 01:20] systemd-fstab-generator[2432]: Ignoring "noauto" option for root device
	[  +4.350506] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.487502] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.972957] kauditd_printk_skb: 1 callbacks suppressed
	[  +3.061623] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +3.634492] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.242581] systemd-fstab-generator[3715]: Ignoring "noauto" option for root device
	
	
	==> etcd [28f55b2bc46fe41a16da394e30326ab7982cdfbeaaca4a4783913aeb02bfe964] <==
	{"level":"info","ts":"2024-08-15T01:19:19.216114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T01:19:19.216121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce8d6acd2292c3b4 elected leader ce8d6acd2292c3b4 at term 2"}
	{"level":"info","ts":"2024-08-15T01:19:19.218116Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:19:19.219268Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce8d6acd2292c3b4","local-member-attributes":"{Name:kubernetes-upgrade-146394 ClientURLs:[https://192.168.72.130:2379]}","request-path":"/0/members/ce8d6acd2292c3b4/attributes","cluster-id":"5be39efd7fce098c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:19:19.219381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:19.219861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:19.220021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5be39efd7fce098c","local-member-id":"ce8d6acd2292c3b4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:19:19.221506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:19:19.221554Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:19:19.220640Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:19.222340Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:19:19.221256Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:19.222454Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:19.228873Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:19.229620Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.130:2379"}
	{"level":"info","ts":"2024-08-15T01:19:50.970163Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T01:19:50.970220Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-146394","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.130:2380"],"advertise-client-urls":["https://192.168.72.130:2379"]}
	{"level":"warn","ts":"2024-08-15T01:19:50.970311Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:19:50.970400Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:19:51.020106Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.130:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:19:51.020180Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.130:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T01:19:51.021598Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce8d6acd2292c3b4","current-leader-member-id":"ce8d6acd2292c3b4"}
	{"level":"info","ts":"2024-08-15T01:19:51.023627Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-15T01:19:51.023691Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-15T01:19:51.023712Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-146394","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.130:2380"],"advertise-client-urls":["https://192.168.72.130:2379"]}
	
	
	==> etcd [f54ac9a226ed4be7532fd0b0af18fe73e75862275b6bf8ede7e7d474f854f840] <==
	{"level":"info","ts":"2024-08-15T01:20:05.067727Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:20:05.068007Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5be39efd7fce098c","local-member-id":"ce8d6acd2292c3b4","added-peer-id":"ce8d6acd2292c3b4","added-peer-peer-urls":["https://192.168.72.130:2380"]}
	{"level":"info","ts":"2024-08-15T01:20:05.068194Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5be39efd7fce098c","local-member-id":"ce8d6acd2292c3b4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:20:05.068361Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:20:05.071705Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T01:20:05.071876Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-15T01:20:05.072008Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-15T01:20:05.073096Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ce8d6acd2292c3b4","initial-advertise-peer-urls":["https://192.168.72.130:2380"],"listen-peer-urls":["https://192.168.72.130:2380"],"advertise-client-urls":["https://192.168.72.130:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.130:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T01:20:05.073117Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:20:06.657328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T01:20:06.657376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:20:06.657415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 received MsgPreVoteResp from ce8d6acd2292c3b4 at term 2"}
	{"level":"info","ts":"2024-08-15T01:20:06.657433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:20:06.657441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 received MsgVoteResp from ce8d6acd2292c3b4 at term 3"}
	{"level":"info","ts":"2024-08-15T01:20:06.657454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T01:20:06.657484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce8d6acd2292c3b4 elected leader ce8d6acd2292c3b4 at term 3"}
	{"level":"info","ts":"2024-08-15T01:20:06.735994Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce8d6acd2292c3b4","local-member-attributes":"{Name:kubernetes-upgrade-146394 ClientURLs:[https://192.168.72.130:2379]}","request-path":"/0/members/ce8d6acd2292c3b4/attributes","cluster-id":"5be39efd7fce098c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:20:06.736055Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:20:06.736469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:20:06.737520Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:20:06.738733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:20:06.739612Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:20:06.740735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.130:2379"}
	{"level":"info","ts":"2024-08-15T01:20:06.743051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:20:06.743114Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:20:26 up 1 min,  0 users,  load average: 1.41, 0.44, 0.15
	Linux kubernetes-upgrade-146394 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c654138c7a1592314839c2eb8251d6dab60fba4c0be8b9a3c36556b0b37f31a] <==
	I0815 01:20:22.129807       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 01:20:22.135403       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 01:20:22.135769       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 01:20:22.135908       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 01:20:22.136027       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 01:20:22.136383       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 01:20:22.136434       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 01:20:22.137179       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 01:20:22.138380       1 aggregator.go:171] initial CRD sync complete...
	I0815 01:20:22.138432       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 01:20:22.138459       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 01:20:22.138483       1 cache.go:39] Caches are synced for autoregister controller
	I0815 01:20:22.168088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 01:20:22.171360       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 01:20:22.171419       1 policy_source.go:224] refreshing policies
	I0815 01:20:22.241001       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:20:23.050675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 01:20:23.455894       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.130]
	I0815 01:20:23.457065       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 01:20:23.461241       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 01:20:23.968404       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 01:20:23.986488       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 01:20:24.024690       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 01:20:24.159552       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 01:20:24.169299       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911] <==
	I0815 01:20:08.175651       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	E0815 01:20:08.176399       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	E0815 01:20:08.176464       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for cluster_authentication_trust_controller" logger="UnhandledError"
	I0815 01:20:08.176499       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I0815 01:20:08.176530       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0815 01:20:08.176559       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I0815 01:20:08.176593       1 establishing_controller.go:85] Shutting down EstablishingController
	I0815 01:20:08.176629       1 naming_controller.go:298] Shutting down NamingConditionController
	E0815 01:20:08.176670       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176709       1 controller.go:148] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176748       1 system_namespaces_controller.go:69] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176785       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176815       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E0815 01:20:08.176843       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176872       1 gc_controller.go:84] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E0815 01:20:08.176900       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	E0815 01:20:08.176973       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	F0815 01:20:08.177006       1 hooks.go:210] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	F0815 01:20:08.268669       1 hooks.go:210] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I0815 01:20:08.352104       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 01:20:08.352250       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 01:20:08.353600       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0815 01:20:08.353649       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0815 01:20:08.353674       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0815 01:20:08.353696       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27] <==
	
	
	==> kube-controller-manager [7f104ad47ebe5299cd6349726a25b85f57b5ecb35141dac1f301fe6a0af18b99] <==
	I0815 01:20:25.560323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"kubernetes-upgrade-146394\" does not exist"
	I0815 01:20:25.564084       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0815 01:20:25.572371       1 shared_informer.go:320] Caches are synced for GC
	I0815 01:20:25.587657       1 shared_informer.go:320] Caches are synced for node
	I0815 01:20:25.587890       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0815 01:20:25.588037       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0815 01:20:25.588109       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0815 01:20:25.588122       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0815 01:20:25.588220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-146394"
	I0815 01:20:25.595091       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0815 01:20:25.595161       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-146394"
	I0815 01:20:25.602543       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0815 01:20:25.613364       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:20:25.650465       1 shared_informer.go:320] Caches are synced for daemon sets
	I0815 01:20:25.653088       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 01:20:25.653504       1 shared_informer.go:320] Caches are synced for taint
	I0815 01:20:25.653686       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0815 01:20:25.655662       1 shared_informer.go:320] Caches are synced for TTL
	I0815 01:20:25.656240       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-146394"
	I0815 01:20:25.656354       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0815 01:20:25.704715       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:20:25.757064       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 01:20:26.146545       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:20:26.150922       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:20:26.150991       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [84f917c798aaf8c950e5702f3bd30f4a7e0d18d28d2bb9e674f5a84b952e7f11] <==
	I0815 01:20:10.560156       1 server_linux.go:66] "Using iptables proxy"
	E0815 01:20:10.583073       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:20:10.599064       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:20:10.601506       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-146394\": dial tcp 192.168.72.130:8443: connect: connection refused"
	E0815 01:20:11.641214       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-146394\": dial tcp 192.168.72.130:8443: connect: connection refused"
	E0815 01:20:13.767468       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-146394\": dial tcp 192.168.72.130:8443: connect: connection refused"
	E0815 01:20:18.093406       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-146394\": dial tcp 192.168.72.130:8443: connect: connection refused"
	
	
	==> kube-proxy [8d8814a487efefb6ee6ac2c433b993d49a3e2fce10d33985b5843dac44e507f8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:19:29.212531       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:19:29.241093       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E0815 01:19:29.241179       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:19:29.370092       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:19:29.370156       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:19:29.370198       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:19:29.381131       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:19:29.381418       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:19:29.381444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:19:29.383357       1 config.go:197] "Starting service config controller"
	I0815 01:19:29.383481       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:19:29.383531       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:19:29.383549       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:19:29.384503       1 config.go:326] "Starting node config controller"
	I0815 01:19:29.384585       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:19:29.484990       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:19:29.485482       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:19:29.487007       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0d95e232348770918375a24e603d32bc770ac2219f62faa26f58a69c54fded4a] <==
	E0815 01:19:21.059229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.062762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:19:21.062835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.063006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:19:21.063080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.871655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:19:21.871768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.918622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:19:21.918742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.943402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 01:19:21.943568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:21.997997       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 01:19:21.998075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:22.038093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:19:22.038216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:22.067469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:19:22.067538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:22.140913       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:19:22.141035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 01:19:22.159061       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 01:19:22.159165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:19:22.168183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:19:22.168272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0815 01:19:25.338007       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 01:19:50.963365       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [773b653a371bda4173cf3e8b1df8496804db22d092b454dc6cfd2f48be9afda0] <==
	E0815 01:20:15.463314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.72.130:8443/apis/apps/v1/statefulsets?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.225374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.130:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.225471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.72.130:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.269843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.130:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.269923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.72.130:8443/apis/storage.k8s.io/v1/csistoragecapacities?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.395131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.72.130:8443/apis/apps/v1/replicasets?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.395185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.72.130:8443/apis/apps/v1/replicasets?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.479073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.72.130:8443/api/v1/persistentvolumes?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.479114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.72.130:8443/api/v1/persistentvolumes?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.518513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.72.130:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.518573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.72.130:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.660011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.72.130:8443/api/v1/replicationcontrollers?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.660068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.72.130:8443/api/v1/replicationcontrollers?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:16.854713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.72.130:8443/api/v1/namespaces?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:16.854773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.72.130:8443/api/v1/namespaces?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:17.040748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.72.130:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:17.040804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.72.130:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:17.427593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.72.130:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:17.427642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.72.130:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:17.777927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.72.130:8443/api/v1/services?resourceVersion=403": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:17.778049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.72.130:8443/api/v1/services?resourceVersion=403\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:18.124507       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.72.130:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:18.124585       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.72.130:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	W0815 01:20:18.699058       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.72.130:8443/api/v1/nodes?resourceVersion=402": dial tcp 192.168.72.130:8443: connect: connection refused
	E0815 01:20:18.699126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.72.130:8443/api/v1/nodes?resourceVersion=402\": dial tcp 192.168.72.130:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869366    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/0b2494acec48842a3f829bc787759928-etcd-certs\") pod \"etcd-kubernetes-upgrade-146394\" (UID: \"0b2494acec48842a3f829bc787759928\") " pod="kube-system/etcd-kubernetes-upgrade-146394"
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869431    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/0b2494acec48842a3f829bc787759928-etcd-data\") pod \"etcd-kubernetes-upgrade-146394\" (UID: \"0b2494acec48842a3f829bc787759928\") " pod="kube-system/etcd-kubernetes-upgrade-146394"
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869450    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a41ac7df1d512cf15bda3354bb1cebbe-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-146394\" (UID: \"a41ac7df1d512cf15bda3354bb1cebbe\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-146394"
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869471    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a41ac7df1d512cf15bda3354bb1cebbe-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-146394\" (UID: \"a41ac7df1d512cf15bda3354bb1cebbe\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-146394"
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869490    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a41ac7df1d512cf15bda3354bb1cebbe-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-146394\" (UID: \"a41ac7df1d512cf15bda3354bb1cebbe\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-146394"
	Aug 15 01:20:19 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:19.869536    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17ef6e2490856765cf874798c277eb6b-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-146394\" (UID: \"17ef6e2490856765cf874798c277eb6b\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-146394"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:20.050857    3430 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-146394"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: E0815 01:20:20.052013    3430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.130:8443: connect: connection refused" node="kubernetes-upgrade-146394"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:20.094822    3430 scope.go:117] "RemoveContainer" containerID="70346e758ad31fbcb57743f9798f48dbd5413841922da8661037be22b086c911"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:20.095094    3430 scope.go:117] "RemoveContainer" containerID="716b49e3e43bc1248e7b3c9f112a3f2cdb867a77d18a6120de9ec45d021b2c27"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: E0815 01:20:20.270220    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-146394?timeout=10s\": dial tcp 192.168.72.130:8443: connect: connection refused" interval="800ms"
	Aug 15 01:20:20 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:20.454000    3430 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-146394"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.228727    3430 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-146394"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.229200    3430 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-146394"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.229278    3430 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.230501    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.655349    3430 apiserver.go:52] "Watching apiserver"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.666283    3430 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.744259    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c330db40-f081-4df7-b1a0-bf67217b2944-lib-modules\") pod \"kube-proxy-8r7r2\" (UID: \"c330db40-f081-4df7-b1a0-bf67217b2944\") " pod="kube-system/kube-proxy-8r7r2"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.744309    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/26afacdc-c874-4017-b5d3-deccebeb4f9e-tmp\") pod \"storage-provisioner\" (UID: \"26afacdc-c874-4017-b5d3-deccebeb4f9e\") " pod="kube-system/storage-provisioner"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.744359    3430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c330db40-f081-4df7-b1a0-bf67217b2944-xtables-lock\") pod \"kube-proxy-8r7r2\" (UID: \"c330db40-f081-4df7-b1a0-bf67217b2944\") " pod="kube-system/kube-proxy-8r7r2"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: E0815 01:20:22.846872    3430 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-146394\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-146394"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.961618    3430 scope.go:117] "RemoveContainer" containerID="70c233e21e0862f8ea6cc51bcf563a091b56c605cfd90df15ce4c882a1984620"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.963525    3430 scope.go:117] "RemoveContainer" containerID="22c9e4f8a0fd6dc8478d86bdaf0ca62350ebd82f9baeb18d0e6ff84feb0ba0fa"
	Aug 15 01:20:22 kubernetes-upgrade-146394 kubelet[3430]: I0815 01:20:22.963703    3430 scope.go:117] "RemoveContainer" containerID="9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4"
	
	
	==> storage-provisioner [9a4237914e00cb848858a24561922ce8845496fa51f0352dc06421755d43eee4] <==
	I0815 01:20:09.589380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 01:20:09.594098       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [fe924b9c2632775aecf767467d680766ba2d1de53c9753be754d094b93a28824] <==
	I0815 01:20:23.160611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:20:23.175657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:20:23.178191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:20:25.674810   64423 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
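	Note on the last stderr line above: "bufio.Scanner: token too long" is the standard Go bufio.ErrTooLong, returned when a single line in lastStart.txt is longer than the scanner's default 64 KiB token limit, so the post-mortem helper could not echo the previous start log. A minimal sketch of how that error arises and how a larger buffer avoids it (the file path is copied from the log; the 1 MiB limit shown is illustrative, not what minikube actually configures):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19443-13088/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; a longer line makes sc.Err() return
		// "bufio.Scanner: token too long", as seen in the stderr block above.
		// Raising the limit (illustrative 1 MiB cap) avoids the failure:
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}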
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-146394 -n kubernetes-upgrade-146394
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-146394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-146394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-146394
--- FAIL: TestKubernetesUpgrade (410.35s)
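	The repeated "dial tcp 192.168.72.130:8443: connect: connection refused" errors in the kube-proxy, kube-scheduler and kubelet sections above all describe the same condition: nothing was accepting TCP connections on the apiserver port while the control plane was restarting. A minimal sketch of the kind of probe that reproduces that error message (the address is copied from the logs; the probe is illustrative and not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Endpoint taken from the failing logs above; adjust for another cluster.
		const apiserver = "192.168.72.130:8443"

		conn, err := net.DialTimeout("tcp", apiserver, 3*time.Second)
		if err != nil {
			// While the apiserver is down this prints something like:
			// "dial tcp 192.168.72.130:8443: connect: connection refused"
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver TCP port is accepting connections")
	}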

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (265.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m25.119264672s)

                                                
                                                
-- stdout --
	* [old-k8s-version-390782] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-390782" primary control-plane node in "old-k8s-version-390782" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:18:15.261046   62901 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:18:15.261270   62901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:15.261278   62901 out.go:304] Setting ErrFile to fd 2...
	I0815 01:18:15.261283   62901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:15.261453   62901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:18:15.261994   62901 out.go:298] Setting JSON to false
	I0815 01:18:15.262914   62901 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7240,"bootTime":1723677455,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:18:15.262972   62901 start.go:139] virtualization: kvm guest
	I0815 01:18:15.265137   62901 out.go:177] * [old-k8s-version-390782] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:18:15.266305   62901 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:18:15.266302   62901 notify.go:220] Checking for updates...
	I0815 01:18:15.267704   62901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:18:15.269044   62901 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:18:15.270391   62901 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:15.271656   62901 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:18:15.273019   62901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:18:15.274956   62901 config.go:182] Loaded profile config "cert-expiration-131152": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:18:15.275083   62901 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:18:15.275210   62901 config.go:182] Loaded profile config "pause-064537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:18:15.275308   62901 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:18:15.311461   62901 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 01:18:15.312881   62901 start.go:297] selected driver: kvm2
	I0815 01:18:15.312912   62901 start.go:901] validating driver "kvm2" against <nil>
	I0815 01:18:15.312925   62901 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:18:15.313631   62901 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:15.313710   62901 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:18:15.328565   62901 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:18:15.328646   62901 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 01:18:15.328888   62901 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:18:15.328922   62901 cni.go:84] Creating CNI manager for ""
	I0815 01:18:15.328929   62901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:15.328938   62901 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 01:18:15.328985   62901 start.go:340] cluster config:
	{Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:15.329107   62901 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:15.330892   62901 out.go:177] * Starting "old-k8s-version-390782" primary control-plane node in "old-k8s-version-390782" cluster
	I0815 01:18:15.331965   62901 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:18:15.331999   62901 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:18:15.332008   62901 cache.go:56] Caching tarball of preloaded images
	I0815 01:18:15.332074   62901 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:18:15.332084   62901 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 01:18:15.332171   62901 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:18:15.332187   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json: {Name:mkf0346f19b6e6d7119ca76efe114063f5674cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:15.332312   62901 start.go:360] acquireMachinesLock for old-k8s-version-390782: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:18:15.332344   62901 start.go:364] duration metric: took 17.007µs to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:18:15.332358   62901 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:18:15.332413   62901 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 01:18:15.333754   62901 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 01:18:15.333887   62901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:15.333924   62901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:15.348472   62901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0815 01:18:15.348937   62901 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:15.349509   62901 main.go:141] libmachine: Using API Version  1
	I0815 01:18:15.349538   62901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:15.349884   62901 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:15.350094   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:18:15.350295   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:15.350507   62901 start.go:159] libmachine.API.Create for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:18:15.350535   62901 client.go:168] LocalClient.Create starting
	I0815 01:18:15.350581   62901 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 01:18:15.350617   62901 main.go:141] libmachine: Decoding PEM data...
	I0815 01:18:15.350642   62901 main.go:141] libmachine: Parsing certificate...
	I0815 01:18:15.350710   62901 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 01:18:15.350738   62901 main.go:141] libmachine: Decoding PEM data...
	I0815 01:18:15.350765   62901 main.go:141] libmachine: Parsing certificate...
	I0815 01:18:15.350788   62901 main.go:141] libmachine: Running pre-create checks...
	I0815 01:18:15.350798   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .PreCreateCheck
	I0815 01:18:15.351144   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:18:15.351519   62901 main.go:141] libmachine: Creating machine...
	I0815 01:18:15.351532   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .Create
	I0815 01:18:15.351686   62901 main.go:141] libmachine: (old-k8s-version-390782) Creating KVM machine...
	I0815 01:18:15.352916   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found existing default KVM network
	I0815 01:18:15.354153   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.354012   62925 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:73:88:a8} reservation:<nil>}
	I0815 01:18:15.355219   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.355146   62925 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288990}
	I0815 01:18:15.355236   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | created network xml: 
	I0815 01:18:15.355244   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | <network>
	I0815 01:18:15.355250   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   <name>mk-old-k8s-version-390782</name>
	I0815 01:18:15.355272   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   <dns enable='no'/>
	I0815 01:18:15.355279   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   
	I0815 01:18:15.355289   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0815 01:18:15.355299   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |     <dhcp>
	I0815 01:18:15.355309   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0815 01:18:15.355323   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |     </dhcp>
	I0815 01:18:15.355334   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   </ip>
	I0815 01:18:15.355346   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG |   
	I0815 01:18:15.355358   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | </network>
	I0815 01:18:15.355368   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | 
	I0815 01:18:15.360118   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | trying to create private KVM network mk-old-k8s-version-390782 192.168.50.0/24...
	I0815 01:18:15.428315   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | private KVM network mk-old-k8s-version-390782 192.168.50.0/24 created
	I0815 01:18:15.428363   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.428284   62925 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:15.428384   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782 ...
	I0815 01:18:15.428407   62901 main.go:141] libmachine: (old-k8s-version-390782) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 01:18:15.428492   62901 main.go:141] libmachine: (old-k8s-version-390782) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 01:18:15.676415   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.676256   62925 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa...
	I0815 01:18:15.913507   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.913356   62925 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/old-k8s-version-390782.rawdisk...
	I0815 01:18:15.913558   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Writing magic tar header
	I0815 01:18:15.913595   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Writing SSH key tar header
	I0815 01:18:15.913613   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:15.913528   62925 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782 ...
	I0815 01:18:15.913761   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782 (perms=drwx------)
	I0815 01:18:15.913786   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 01:18:15.913799   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782
	I0815 01:18:15.913815   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 01:18:15.913830   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:15.913845   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 01:18:15.913858   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 01:18:15.913872   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 01:18:15.913898   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 01:18:15.913917   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 01:18:15.913926   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home/jenkins
	I0815 01:18:15.913941   62901 main.go:141] libmachine: (old-k8s-version-390782) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 01:18:15.913955   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Checking permissions on dir: /home
	I0815 01:18:15.913972   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Skipping /home - not owner
	I0815 01:18:15.913987   62901 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:18:15.915132   62901 main.go:141] libmachine: (old-k8s-version-390782) define libvirt domain using xml: 
	I0815 01:18:15.915165   62901 main.go:141] libmachine: (old-k8s-version-390782) <domain type='kvm'>
	I0815 01:18:15.915178   62901 main.go:141] libmachine: (old-k8s-version-390782)   <name>old-k8s-version-390782</name>
	I0815 01:18:15.915189   62901 main.go:141] libmachine: (old-k8s-version-390782)   <memory unit='MiB'>2200</memory>
	I0815 01:18:15.915202   62901 main.go:141] libmachine: (old-k8s-version-390782)   <vcpu>2</vcpu>
	I0815 01:18:15.915208   62901 main.go:141] libmachine: (old-k8s-version-390782)   <features>
	I0815 01:18:15.915216   62901 main.go:141] libmachine: (old-k8s-version-390782)     <acpi/>
	I0815 01:18:15.915221   62901 main.go:141] libmachine: (old-k8s-version-390782)     <apic/>
	I0815 01:18:15.915236   62901 main.go:141] libmachine: (old-k8s-version-390782)     <pae/>
	I0815 01:18:15.915243   62901 main.go:141] libmachine: (old-k8s-version-390782)     
	I0815 01:18:15.915249   62901 main.go:141] libmachine: (old-k8s-version-390782)   </features>
	I0815 01:18:15.915256   62901 main.go:141] libmachine: (old-k8s-version-390782)   <cpu mode='host-passthrough'>
	I0815 01:18:15.915261   62901 main.go:141] libmachine: (old-k8s-version-390782)   
	I0815 01:18:15.915268   62901 main.go:141] libmachine: (old-k8s-version-390782)   </cpu>
	I0815 01:18:15.915273   62901 main.go:141] libmachine: (old-k8s-version-390782)   <os>
	I0815 01:18:15.915277   62901 main.go:141] libmachine: (old-k8s-version-390782)     <type>hvm</type>
	I0815 01:18:15.915285   62901 main.go:141] libmachine: (old-k8s-version-390782)     <boot dev='cdrom'/>
	I0815 01:18:15.915290   62901 main.go:141] libmachine: (old-k8s-version-390782)     <boot dev='hd'/>
	I0815 01:18:15.915298   62901 main.go:141] libmachine: (old-k8s-version-390782)     <bootmenu enable='no'/>
	I0815 01:18:15.915302   62901 main.go:141] libmachine: (old-k8s-version-390782)   </os>
	I0815 01:18:15.915318   62901 main.go:141] libmachine: (old-k8s-version-390782)   <devices>
	I0815 01:18:15.915326   62901 main.go:141] libmachine: (old-k8s-version-390782)     <disk type='file' device='cdrom'>
	I0815 01:18:15.915336   62901 main.go:141] libmachine: (old-k8s-version-390782)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/boot2docker.iso'/>
	I0815 01:18:15.915343   62901 main.go:141] libmachine: (old-k8s-version-390782)       <target dev='hdc' bus='scsi'/>
	I0815 01:18:15.915349   62901 main.go:141] libmachine: (old-k8s-version-390782)       <readonly/>
	I0815 01:18:15.915354   62901 main.go:141] libmachine: (old-k8s-version-390782)     </disk>
	I0815 01:18:15.915360   62901 main.go:141] libmachine: (old-k8s-version-390782)     <disk type='file' device='disk'>
	I0815 01:18:15.915377   62901 main.go:141] libmachine: (old-k8s-version-390782)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 01:18:15.915389   62901 main.go:141] libmachine: (old-k8s-version-390782)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/old-k8s-version-390782.rawdisk'/>
	I0815 01:18:15.915397   62901 main.go:141] libmachine: (old-k8s-version-390782)       <target dev='hda' bus='virtio'/>
	I0815 01:18:15.915402   62901 main.go:141] libmachine: (old-k8s-version-390782)     </disk>
	I0815 01:18:15.915409   62901 main.go:141] libmachine: (old-k8s-version-390782)     <interface type='network'>
	I0815 01:18:15.915417   62901 main.go:141] libmachine: (old-k8s-version-390782)       <source network='mk-old-k8s-version-390782'/>
	I0815 01:18:15.915428   62901 main.go:141] libmachine: (old-k8s-version-390782)       <model type='virtio'/>
	I0815 01:18:15.915433   62901 main.go:141] libmachine: (old-k8s-version-390782)     </interface>
	I0815 01:18:15.915442   62901 main.go:141] libmachine: (old-k8s-version-390782)     <interface type='network'>
	I0815 01:18:15.915448   62901 main.go:141] libmachine: (old-k8s-version-390782)       <source network='default'/>
	I0815 01:18:15.915455   62901 main.go:141] libmachine: (old-k8s-version-390782)       <model type='virtio'/>
	I0815 01:18:15.915461   62901 main.go:141] libmachine: (old-k8s-version-390782)     </interface>
	I0815 01:18:15.915466   62901 main.go:141] libmachine: (old-k8s-version-390782)     <serial type='pty'>
	I0815 01:18:15.915472   62901 main.go:141] libmachine: (old-k8s-version-390782)       <target port='0'/>
	I0815 01:18:15.915478   62901 main.go:141] libmachine: (old-k8s-version-390782)     </serial>
	I0815 01:18:15.915484   62901 main.go:141] libmachine: (old-k8s-version-390782)     <console type='pty'>
	I0815 01:18:15.915491   62901 main.go:141] libmachine: (old-k8s-version-390782)       <target type='serial' port='0'/>
	I0815 01:18:15.915496   62901 main.go:141] libmachine: (old-k8s-version-390782)     </console>
	I0815 01:18:15.915503   62901 main.go:141] libmachine: (old-k8s-version-390782)     <rng model='virtio'>
	I0815 01:18:15.915510   62901 main.go:141] libmachine: (old-k8s-version-390782)       <backend model='random'>/dev/random</backend>
	I0815 01:18:15.915517   62901 main.go:141] libmachine: (old-k8s-version-390782)     </rng>
	I0815 01:18:15.915525   62901 main.go:141] libmachine: (old-k8s-version-390782)     
	I0815 01:18:15.915534   62901 main.go:141] libmachine: (old-k8s-version-390782)     
	I0815 01:18:15.915541   62901 main.go:141] libmachine: (old-k8s-version-390782)   </devices>
	I0815 01:18:15.915550   62901 main.go:141] libmachine: (old-k8s-version-390782) </domain>
	I0815 01:18:15.915560   62901 main.go:141] libmachine: (old-k8s-version-390782) 
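	(Note: the XML dumped above is the libvirt domain definition the kvm2 driver registers before booting the VM. As a rough, hedged sketch only — the driver itself goes through the libvirt Go bindings, not the virsh CLI — the same define-and-start step could look like the following; the domain.xml path is hypothetical.)

```go
// Hypothetical sketch: define and start a libvirt domain from an XML file
// using the virsh CLI. The real kvm2 driver uses the libvirt API directly.
package main

import (
	"fmt"
	"os/exec"
)

func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" registers the domain from XML like the dump above.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots it, the equivalent of the "Creating domain..." step.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/old-k8s-version-390782.xml", "old-k8s-version-390782"); err != nil {
		fmt.Println(err)
	}
}
```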
	I0815 01:18:15.919941   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:df:3d:81 in network default
	I0815 01:18:15.920701   62901 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:18:15.920726   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:15.921463   62901 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:18:15.921779   62901 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:18:15.922248   62901 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:18:15.922882   62901 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:18:17.168776   62901 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:18:17.169581   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:17.170038   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:17.170058   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:17.170016   62925 retry.go:31] will retry after 311.149741ms: waiting for machine to come up
	I0815 01:18:17.482677   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:17.483252   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:17.483279   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:17.483204   62925 retry.go:31] will retry after 276.633139ms: waiting for machine to come up
	I0815 01:18:17.761773   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:17.762240   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:17.762265   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:17.762199   62925 retry.go:31] will retry after 356.697583ms: waiting for machine to come up
	I0815 01:18:18.120675   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:18.121283   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:18.121320   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:18.121234   62925 retry.go:31] will retry after 585.242983ms: waiting for machine to come up
	I0815 01:18:18.707730   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:18.708228   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:18.708280   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:18.708163   62925 retry.go:31] will retry after 689.67481ms: waiting for machine to come up
	I0815 01:18:19.398920   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:19.399454   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:19.399480   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:19.399409   62925 retry.go:31] will retry after 825.287626ms: waiting for machine to come up
	I0815 01:18:20.225656   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:20.226134   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:20.226162   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:20.226066   62925 retry.go:31] will retry after 737.299892ms: waiting for machine to come up
	I0815 01:18:20.964587   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:20.965182   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:20.965207   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:20.965130   62925 retry.go:31] will retry after 1.430015488s: waiting for machine to come up
	I0815 01:18:22.396726   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:22.397262   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:22.397288   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:22.397215   62925 retry.go:31] will retry after 1.383910462s: waiting for machine to come up
	I0815 01:18:23.782390   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:23.782843   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:23.782864   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:23.782797   62925 retry.go:31] will retry after 2.264972596s: waiting for machine to come up
	I0815 01:18:26.049330   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:26.049825   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:26.049854   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:26.049780   62925 retry.go:31] will retry after 2.624299371s: waiting for machine to come up
	I0815 01:18:28.677599   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:28.678123   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:28.678145   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:28.678082   62925 retry.go:31] will retry after 3.117191657s: waiting for machine to come up
	I0815 01:18:31.798051   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:31.798633   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:18:31.798657   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:18:31.798584   62925 retry.go:31] will retry after 3.580140298s: waiting for machine to come up
	I0815 01:18:35.380019   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.380569   62901 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:18:35.380591   62901 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:18:35.380606   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.380988   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782
	I0815 01:18:35.458829   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:18:35.458860   62901 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:18:35.458879   62901 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
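	(Note: the repeating "will retry after ..." DBG lines above are a jittered-backoff poll of the libvirt DHCP leases until the new MAC shows up with an address. A minimal stdlib sketch of that pattern — not minikube's actual retry.go — is below; lookupIP is a hypothetical stand-in for the lease query.)

```go
// Minimal sketch of the "will retry after ..." polling pattern seen above.
// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay a little and grow it, mirroring the log output.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Fake lease query: pretend the address appears after two seconds.
		if time.Since(start) < 2*time.Second {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.21", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```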
	I0815 01:18:35.461598   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.462086   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:35.462116   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.462256   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:18:35.462285   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:18:35.462328   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:18:35.462349   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:18:35.462362   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:18:35.584924   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
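	(Note: the WaitForSSH step above shells out to the system ssh binary with the options printed in the DBG line and simply runs `exit 0` until it succeeds. A hedged sketch of that reachability probe follows; the address and key path are just the ones from this log.)

```go
// Sketch of the external-ssh reachability probe: run "exit 0" over ssh with
// options like those printed in the log, and report success or failure.
package main

import (
	"fmt"
	"os/exec"
)

func sshReachable(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	// A nil error means the remote command ran, i.e. SSH is available.
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReachable("192.168.50.21",
		"/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa"))
}
```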
	I0815 01:18:35.585133   62901 main.go:141] libmachine: (old-k8s-version-390782) KVM machine creation complete!
	I0815 01:18:35.585552   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:18:35.586131   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:35.586345   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:35.586523   62901 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 01:18:35.586557   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:18:35.588011   62901 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 01:18:35.588029   62901 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 01:18:35.588036   62901 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 01:18:35.588044   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:35.590687   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.591185   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:35.591223   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.591314   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:35.591482   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.591589   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.591723   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:35.591903   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:35.592088   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:35.592103   62901 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 01:18:35.688105   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:18:35.688133   62901 main.go:141] libmachine: Detecting the provisioner...
	I0815 01:18:35.688145   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:35.690847   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.691226   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:35.691254   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.691397   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:35.691587   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.691769   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.691912   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:35.692088   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:35.692285   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:35.692298   62901 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 01:18:35.789590   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 01:18:35.789678   62901 main.go:141] libmachine: found compatible host: buildroot
	I0815 01:18:35.789691   62901 main.go:141] libmachine: Provisioning with buildroot...
	I0815 01:18:35.789704   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:18:35.789958   62901 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:18:35.789988   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:18:35.790196   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:35.793502   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.793998   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:35.794032   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.794160   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:35.794372   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.794627   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.794834   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:35.795022   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:35.795236   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:35.795249   62901 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:18:35.911921   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:18:35.911944   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:35.915479   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.915959   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:35.916005   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:35.916193   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:35.916397   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.916559   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:35.916740   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:35.916901   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:35.917074   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:35.917097   62901 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:18:36.025041   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:18:36.025087   62901 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:18:36.025123   62901 buildroot.go:174] setting up certificates
	I0815 01:18:36.025133   62901 provision.go:84] configureAuth start
	I0815 01:18:36.025145   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:18:36.025414   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:18:36.028031   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.028316   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.028353   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.028515   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.030741   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.031098   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.031124   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.031265   62901 provision.go:143] copyHostCerts
	I0815 01:18:36.031320   62901 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:18:36.031337   62901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:18:36.031409   62901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:18:36.031504   62901 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:18:36.031513   62901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:18:36.031539   62901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:18:36.031593   62901 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:18:36.031600   62901 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:18:36.031620   62901 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:18:36.031662   62901 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:18:36.102875   62901 provision.go:177] copyRemoteCerts
	I0815 01:18:36.102929   62901 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:18:36.102950   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.105686   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.106074   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.106102   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.106300   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.106476   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.106629   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.106756   62901 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:18:36.186785   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:18:36.209964   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:18:36.232470   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:18:36.255656   62901 provision.go:87] duration metric: took 230.51002ms to configureAuth
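	(Note: the configureAuth phase above generates a server certificate signed by the profile CA, with the SANs listed in the "generating server cert" line. The sketch below is a simplified illustration only, not minikube's provision code; it assumes the CA key is PKCS#1 RSA and uses the paths and SANs from this log.)

```go
// Hedged sketch of the "generating server cert ... san=[...]" step: sign a
// server certificate with an existing CA, embedding the log's SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEMBlock(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	base := "/home/jenkins/minikube-integration/19443-13088/.minikube"
	caCert, err := x509.ParseCertificate(mustPEMBlock(base + "/certs/ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumption: the CA key is an unencrypted PKCS#1 RSA key.
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock(base + "/certs/ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-390782"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs straight from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-390782"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.21")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```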
	I0815 01:18:36.255685   62901 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:18:36.255891   62901 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:18:36.255956   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.258764   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.259167   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.259199   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.259344   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.259570   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.259736   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.259925   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.260089   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:36.260261   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:36.260277   62901 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:18:36.507967   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:18:36.507993   62901 main.go:141] libmachine: Checking connection to Docker...
	I0815 01:18:36.508003   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetURL
	I0815 01:18:36.509356   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using libvirt version 6000000
	I0815 01:18:36.511794   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.512191   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.512220   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.512395   62901 main.go:141] libmachine: Docker is up and running!
	I0815 01:18:36.512409   62901 main.go:141] libmachine: Reticulating splines...
	I0815 01:18:36.512417   62901 client.go:171] duration metric: took 21.161871181s to LocalClient.Create
	I0815 01:18:36.512441   62901 start.go:167] duration metric: took 21.161936031s to libmachine.API.Create "old-k8s-version-390782"
	I0815 01:18:36.512454   62901 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:18:36.512467   62901 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:18:36.512486   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:36.512776   62901 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:18:36.512801   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.514839   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.515154   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.515189   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.515355   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.515592   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.515733   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.515870   62901 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:18:36.599234   62901 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:18:36.603175   62901 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:18:36.603195   62901 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:18:36.603251   62901 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:18:36.603347   62901 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:18:36.603461   62901 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:18:36.613138   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:18:36.635882   62901 start.go:296] duration metric: took 123.410932ms for postStartSetup
	I0815 01:18:36.635928   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:18:36.636541   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:18:36.639425   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.639830   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.639868   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.640080   62901 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:18:36.640250   62901 start.go:128] duration metric: took 21.307828503s to createHost
	I0815 01:18:36.640276   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.642954   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.643324   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.643347   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.643548   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.643760   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.643948   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.644096   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.644285   62901 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:36.644446   62901 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:18:36.644468   62901 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:18:36.741069   62901 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723684716.708936236
	
	I0815 01:18:36.741090   62901 fix.go:216] guest clock: 1723684716.708936236
	I0815 01:18:36.741097   62901 fix.go:229] Guest: 2024-08-15 01:18:36.708936236 +0000 UTC Remote: 2024-08-15 01:18:36.640259084 +0000 UTC m=+21.412530548 (delta=68.677152ms)
	I0815 01:18:36.741115   62901 fix.go:200] guest clock delta is within tolerance: 68.677152ms
	I0815 01:18:36.741119   62901 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 21.408769434s
	I0815 01:18:36.741170   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:36.741448   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:18:36.743920   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.744266   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.744297   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.744446   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:36.744893   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:36.745074   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:18:36.745161   62901 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:18:36.745195   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.745267   62901 ssh_runner.go:195] Run: cat /version.json
	I0815 01:18:36.745284   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:18:36.747884   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.748163   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.748406   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.748440   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.748522   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:36.748553   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.748554   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:36.748822   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:18:36.748822   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.749034   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.749072   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:18:36.749236   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:18:36.749285   62901 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:18:36.749409   62901 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:18:36.825216   62901 ssh_runner.go:195] Run: systemctl --version
	I0815 01:18:36.855603   62901 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:18:37.020573   62901 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:18:37.028648   62901 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:18:37.028729   62901 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:18:37.045556   62901 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:18:37.045588   62901 start.go:495] detecting cgroup driver to use...
	I0815 01:18:37.045647   62901 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:18:37.063227   62901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:18:37.078785   62901 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:18:37.078843   62901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:18:37.093746   62901 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:18:37.108703   62901 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:18:37.235966   62901 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:18:37.409726   62901 docker.go:233] disabling docker service ...
	I0815 01:18:37.409800   62901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:18:37.423173   62901 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:18:37.435573   62901 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:18:37.542629   62901 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:18:37.647079   62901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:18:37.659692   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:18:37.677559   62901 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:18:37.677647   62901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:37.686788   62901 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:18:37.686849   62901 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:37.696197   62901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:37.705195   62901 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:37.714113   62901 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:18:37.723448   62901 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:18:37.731778   62901 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:18:37.731828   62901 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:18:37.742733   62901 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:18:37.751643   62901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:18:37.850633   62901 ssh_runner.go:195] Run: sudo systemctl restart crio
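	(Note: the block above is the CRI-O reconfiguration pass: write /etc/crictl.yaml, set the pause image and cgroup manager with sed, load br_netfilter, enable IP forwarding, then restart crio. A hedged sketch collapsing that sequence into one ordered command list is below; in the real run each step goes through minikube's ssh_runner on the guest, here they are simply executed with sh -c.)

```go
// Sketch of the CRI-O reconfiguration sequence above as one ordered list of
// shell commands, run locally for illustration only.
package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := []string{
		`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		// Stop at the first failing step, printing its combined output.
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v: %s", s, err, out)
		}
	}
}
```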
	I0815 01:18:37.980286   62901 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:18:37.980365   62901 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:18:37.985603   62901 start.go:563] Will wait 60s for crictl version
	I0815 01:18:37.985653   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:37.989325   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:18:38.025446   62901 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:18:38.025526   62901 ssh_runner.go:195] Run: crio --version
	I0815 01:18:38.052335   62901 ssh_runner.go:195] Run: crio --version
	I0815 01:18:38.079325   62901 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:18:38.080350   62901 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:18:38.082844   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:38.083283   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:18:29 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:18:38.083311   62901 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:18:38.083549   62901 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:18:38.087341   62901 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:18:38.098986   62901 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:18:38.099109   62901 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:18:38.099169   62901 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:18:38.129986   62901 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:18:38.130045   62901 ssh_runner.go:195] Run: which lz4
	I0815 01:18:38.133747   62901 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:18:38.137445   62901 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:18:38.137476   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:18:39.499103   62901 crio.go:462] duration metric: took 1.365389682s to copy over tarball
	I0815 01:18:39.499174   62901 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:18:42.032527   62901 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.533306372s)
	I0815 01:18:42.032570   62901 crio.go:469] duration metric: took 2.533442037s to extract the tarball
	I0815 01:18:42.032578   62901 ssh_runner.go:146] rm: /preloaded.tar.lz4
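	(Note: the preload path above copies the lz4 image tarball to the guest over scp, unpacks it into /var, then deletes it. A small sketch of the unpack-and-clean-up half, using the same tar invocation the log shows, is below; it assumes the tarball has already been copied to /preloaded.tar.lz4.)

```go
// Sketch of the preload extraction step: unpack the lz4 tarball into /var
// exactly as the "sudo tar --xattrs ... -I lz4" command above does, then
// remove the tarball.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v: %s", c, err, out)
		}
	}
}
```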
	I0815 01:18:42.076717   62901 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:18:42.124324   62901 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:18:42.124349   62901 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:18:42.124422   62901 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.124448   62901 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:18:42.124465   62901 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.124469   62901 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.124469   62901 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.124417   62901 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:18:42.124675   62901 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.124422   62901 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.126393   62901 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.126446   62901 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:18:42.126504   62901 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.126537   62901 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.126677   62901 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.126683   62901 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:18:42.126702   62901 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.126740   62901 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.334588   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:18:42.373124   62901 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:18:42.373162   62901 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:18:42.373200   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.376945   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:18:42.384186   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.391317   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.396156   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.416184   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:18:42.417853   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.420925   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.457174   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.499333   62901 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:18:42.499399   62901 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.499446   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.515004   62901 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:18:42.515026   62901 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:18:42.515046   62901 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.515046   62901 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.515088   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.515088   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.518962   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:18:42.590012   62901 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:18:42.590056   62901 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.590082   62901 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:18:42.590112   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.590112   62901 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.590198   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.604756   62901 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:18:42.604806   62901 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.604809   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.604849   62901 ssh_runner.go:195] Run: which crictl
	I0815 01:18:42.604853   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.604890   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.604927   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:18:42.604959   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.604975   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.690129   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.724791   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.754797   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.754799   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.754799   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.754809   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.796884   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:18:42.807108   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:18:42.864050   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:18:42.884593   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:18:42.884637   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:18:42.884713   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:42.967087   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:18:42.967095   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:18:42.970564   62901 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:18:43.012605   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:18:43.012642   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:18:43.012732   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:18:43.012786   62901 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:18:43.145628   62901 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:18:43.145691   62901 cache_images.go:92] duration metric: took 1.021327666s to LoadCachedImages
	W0815 01:18:43.145751   62901 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0815 01:18:43.145776   62901 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:18:43.145896   62901 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:18:43.145977   62901 ssh_runner.go:195] Run: crio config
	I0815 01:18:43.196228   62901 cni.go:84] Creating CNI manager for ""
	I0815 01:18:43.196247   62901 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:43.196256   62901 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:18:43.196275   62901 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:18:43.196390   62901 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:18:43.196446   62901 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:18:43.206350   62901 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:18:43.206423   62901 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:18:43.215657   62901 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:18:43.230982   62901 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:18:43.249230   62901 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:18:43.267757   62901 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:18:43.272402   62901 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:18:43.287670   62901 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:18:43.419547   62901 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:18:43.438464   62901 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:18:43.438492   62901 certs.go:194] generating shared ca certs ...
	I0815 01:18:43.438512   62901 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:43.438694   62901 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:18:43.438756   62901 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:18:43.438770   62901 certs.go:256] generating profile certs ...
	I0815 01:18:43.438848   62901 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:18:43.438875   62901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt with IP's: []
	I0815 01:18:43.583816   62901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt ...
	I0815 01:18:43.583845   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: {Name:mk3704cec091c432f9995f39282f09da868b376f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:43.584029   62901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key ...
	I0815 01:18:43.584049   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key: {Name:mkc3166e5c07337fd4a57baf7a3b62c154e12547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:43.584164   62901 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:18:43.584187   62901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt.d79afed6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.21]
	I0815 01:18:43.921387   62901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt.d79afed6 ...
	I0815 01:18:43.921416   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt.d79afed6: {Name:mk7d4094a83796263013154c0cb821537c264b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:43.921609   62901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6 ...
	I0815 01:18:43.921631   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6: {Name:mk4b03b753cd11d09dc2d3d4898fd75cb5e6dbb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:43.921733   62901 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt.d79afed6 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt
	I0815 01:18:43.921808   62901 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6 -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key
	I0815 01:18:43.921859   62901 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:18:43.921875   62901 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt with IP's: []
	I0815 01:18:44.170034   62901 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt ...
	I0815 01:18:44.170059   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt: {Name:mk4c373fce31688a292a12dc39a7fd0454748c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:44.170216   62901 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key ...
	I0815 01:18:44.170229   62901 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key: {Name:mk5170e2a41af876693ef52ec4446c63b6968e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:44.170405   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:18:44.170441   62901 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:18:44.170448   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:18:44.170469   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:18:44.170493   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:18:44.170514   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:18:44.170551   62901 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:18:44.171103   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:18:44.198245   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:18:44.224553   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:18:44.257043   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:18:44.292022   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:18:44.318974   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:18:44.350461   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:18:44.382990   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:18:44.405719   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:18:44.429283   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:18:44.452552   62901 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:18:44.475027   62901 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:18:44.490851   62901 ssh_runner.go:195] Run: openssl version
	I0815 01:18:44.496900   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:18:44.507739   62901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:18:44.512174   62901 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:18:44.512228   62901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:18:44.518864   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:18:44.529056   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:18:44.538684   62901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:44.542941   62901 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:44.543002   62901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:44.548393   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:18:44.558468   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:18:44.568385   62901 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:18:44.572941   62901 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:18:44.573000   62901 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:18:44.579115   62901 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:18:44.589228   62901 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:18:44.592911   62901 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 01:18:44.592960   62901 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:44.593023   62901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:18:44.593087   62901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:18:44.633019   62901 cri.go:89] found id: ""
	I0815 01:18:44.633105   62901 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:18:44.643911   62901 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:18:44.653552   62901 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:18:44.663792   62901 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:18:44.663809   62901 kubeadm.go:157] found existing configuration files:
	
	I0815 01:18:44.663852   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:18:44.672407   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:18:44.672467   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:18:44.681471   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:18:44.690216   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:18:44.690270   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:18:44.698929   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:18:44.707217   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:18:44.707269   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:18:44.716108   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:18:44.724552   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:18:44.724604   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:18:44.733134   62901 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:18:45.008366   62901 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:20:42.831644   62901 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:20:42.831740   62901 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:20:42.833205   62901 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:20:42.833285   62901 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:20:42.833397   62901 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:20:42.833508   62901 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:20:42.833647   62901 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:20:42.833750   62901 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:20:42.835471   62901 out.go:204]   - Generating certificates and keys ...
	I0815 01:20:42.835544   62901 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:20:42.835602   62901 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:20:42.835682   62901 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 01:20:42.835764   62901 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 01:20:42.835839   62901 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 01:20:42.835903   62901 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 01:20:42.835970   62901 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 01:20:42.836085   62901 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	I0815 01:20:42.836161   62901 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 01:20:42.836327   62901 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	I0815 01:20:42.836428   62901 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 01:20:42.836513   62901 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 01:20:42.836584   62901 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 01:20:42.836668   62901 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:20:42.836748   62901 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:20:42.836826   62901 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:20:42.836921   62901 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:20:42.836987   62901 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:20:42.837070   62901 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:20:42.837139   62901 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:20:42.837186   62901 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:20:42.837249   62901 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:20:42.839455   62901 out.go:204]   - Booting up control plane ...
	I0815 01:20:42.839541   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:20:42.839624   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:20:42.839683   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:20:42.839758   62901 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:20:42.839960   62901 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:20:42.840029   62901 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:20:42.840110   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:20:42.840320   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:20:42.840428   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:20:42.840630   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:20:42.840740   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:20:42.840933   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:20:42.841024   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:20:42.841210   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:20:42.841304   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:20:42.841494   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:20:42.841506   62901 kubeadm.go:310] 
	I0815 01:20:42.841537   62901 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:20:42.841587   62901 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:20:42.841594   62901 kubeadm.go:310] 
	I0815 01:20:42.841630   62901 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:20:42.841687   62901 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:20:42.841831   62901 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:20:42.841843   62901 kubeadm.go:310] 
	I0815 01:20:42.841993   62901 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:20:42.842043   62901 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:20:42.842076   62901 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:20:42.842083   62901 kubeadm.go:310] 
	I0815 01:20:42.842187   62901 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:20:42.842291   62901 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:20:42.842303   62901 kubeadm.go:310] 
	I0815 01:20:42.842435   62901 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:20:42.842547   62901 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:20:42.842630   62901 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:20:42.842703   62901 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:20:42.842743   62901 kubeadm.go:310] 
	W0815 01:20:42.842839   62901 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-390782] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 01:20:42.842877   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:20:43.511675   62901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:20:43.525004   62901 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:20:43.534682   62901 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:20:43.534712   62901 kubeadm.go:157] found existing configuration files:
	
	I0815 01:20:43.534765   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:20:43.543599   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:20:43.543658   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:20:43.552598   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:20:43.561162   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:20:43.561215   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:20:43.570098   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:20:43.578490   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:20:43.578554   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:20:43.587163   62901 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:20:43.595686   62901 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:20:43.595740   62901 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:20:43.604408   62901 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:20:43.672454   62901 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:20:43.672646   62901 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:20:43.810398   62901 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:20:43.810505   62901 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:20:43.810607   62901 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:20:43.977623   62901 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:20:43.979610   62901 out.go:204]   - Generating certificates and keys ...
	I0815 01:20:43.979708   62901 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:20:43.979799   62901 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:20:43.979872   62901 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:20:43.979935   62901 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:20:43.980025   62901 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:20:43.980131   62901 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:20:43.980582   62901 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:20:43.981233   62901 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:20:43.981910   62901 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:20:43.982604   62901 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:20:43.982860   62901 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:20:43.982922   62901 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:20:44.115044   62901 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:20:44.261201   62901 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:20:44.378138   62901 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:20:44.531909   62901 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:20:44.545789   62901 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:20:44.547652   62901 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:20:44.547785   62901 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:20:44.670260   62901 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:20:44.672012   62901 out.go:204]   - Booting up control plane ...
	I0815 01:20:44.672110   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:20:44.676684   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:20:44.677932   62901 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:20:44.678789   62901 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:20:44.680952   62901 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:21:24.678853   62901 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:21:24.679210   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:21:24.679436   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:21:29.679634   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:21:29.679864   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:21:39.679775   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:21:39.679946   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:21:59.680872   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:21:59.681136   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:22:39.683053   62901 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:22:39.683364   62901 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:22:39.683390   62901 kubeadm.go:310] 
	I0815 01:22:39.683446   62901 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:22:39.683529   62901 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:22:39.683550   62901 kubeadm.go:310] 
	I0815 01:22:39.683593   62901 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:22:39.683643   62901 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:22:39.683774   62901 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:22:39.683799   62901 kubeadm.go:310] 
	I0815 01:22:39.683936   62901 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:22:39.683994   62901 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:22:39.684036   62901 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:22:39.684047   62901 kubeadm.go:310] 
	I0815 01:22:39.684178   62901 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:22:39.684328   62901 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:22:39.684345   62901 kubeadm.go:310] 
	I0815 01:22:39.684489   62901 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:22:39.684605   62901 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:22:39.684733   62901 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:22:39.684837   62901 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:22:39.684848   62901 kubeadm.go:310] 
	I0815 01:22:39.685395   62901 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:22:39.685507   62901 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:22:39.685618   62901 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:22:39.685715   62901 kubeadm.go:394] duration metric: took 3m55.092757137s to StartCluster
	I0815 01:22:39.685775   62901 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:22:39.685847   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:22:39.729020   62901 cri.go:89] found id: ""
	I0815 01:22:39.729050   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.729061   62901 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:22:39.729069   62901 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:22:39.729139   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:22:39.765245   62901 cri.go:89] found id: ""
	I0815 01:22:39.765273   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.765284   62901 logs.go:278] No container was found matching "etcd"
	I0815 01:22:39.765291   62901 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:22:39.765359   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:22:39.799416   62901 cri.go:89] found id: ""
	I0815 01:22:39.799445   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.799455   62901 logs.go:278] No container was found matching "coredns"
	I0815 01:22:39.799462   62901 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:22:39.799522   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:22:39.836454   62901 cri.go:89] found id: ""
	I0815 01:22:39.836488   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.836497   62901 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:22:39.836504   62901 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:22:39.836555   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:22:39.891014   62901 cri.go:89] found id: ""
	I0815 01:22:39.891041   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.891051   62901 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:22:39.891059   62901 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:22:39.891111   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:22:39.939689   62901 cri.go:89] found id: ""
	I0815 01:22:39.939711   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.939721   62901 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:22:39.939729   62901 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:22:39.939789   62901 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:22:39.980710   62901 cri.go:89] found id: ""
	I0815 01:22:39.980742   62901 logs.go:276] 0 containers: []
	W0815 01:22:39.980753   62901 logs.go:278] No container was found matching "kindnet"
	I0815 01:22:39.980764   62901 logs.go:123] Gathering logs for kubelet ...
	I0815 01:22:39.980779   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:22:40.032558   62901 logs.go:123] Gathering logs for dmesg ...
	I0815 01:22:40.032590   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:22:40.049782   62901 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:22:40.049810   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:22:40.160450   62901 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:22:40.160470   62901 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:22:40.160486   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:22:40.280277   62901 logs.go:123] Gathering logs for container status ...
	I0815 01:22:40.280312   62901 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0815 01:22:40.329871   62901 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:22:40.329925   62901 out.go:239] * 
	* 
	W0815 01:22:40.329986   62901 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:22:40.330010   62901 out.go:239] * 
	* 
	W0815 01:22:40.330906   62901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:22:40.333952   62901 out.go:177] 
	W0815 01:22:40.335240   62901 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:22:40.335318   62901 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:22:40.335352   62901 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:22:40.336818   62901 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 6 (226.193919ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:22:40.617173   65912 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-390782" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (265.41s)
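The suggestion printed by minikube above points at a kubelet/CRI-O cgroup-driver mismatch. A minimal sketch of how that workaround could be checked and applied by hand, using the profile name old-k8s-version-390782 from this run; the CRI-O config path /etc/crio/crio.conf is the upstream default and is an assumption here, not something taken from the test output:

	# inspect why the kubelet keeps failing its health check (journalctl hint taken from the log above)
	out/minikube-linux-amd64 -p old-k8s-version-390782 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# see which cgroup manager CRI-O is configured with (systemd vs cgroupfs)
	out/minikube-linux-amd64 -p old-k8s-version-390782 ssh "sudo grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d 2>/dev/null"
	# retry the failed start with the kubelet cgroup driver pinned to systemd, as the suggestion says
	out/minikube-linux-amd64 start -p old-k8s-version-390782 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd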

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (56.95s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-064537 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-064537 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.156093624s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-064537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-064537" primary control-plane node in "pause-064537" cluster
	* Updating the running kvm2 "pause-064537" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-064537" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:18:27.781757   63084 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:18:27.782133   63084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:27.782156   63084 out.go:304] Setting ErrFile to fd 2...
	I0815 01:18:27.782163   63084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:27.782490   63084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:18:27.783042   63084 out.go:298] Setting JSON to false
	I0815 01:18:27.783965   63084 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7253,"bootTime":1723677455,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:18:27.784017   63084 start.go:139] virtualization: kvm guest
	I0815 01:18:27.786085   63084 out.go:177] * [pause-064537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:18:27.787287   63084 notify.go:220] Checking for updates...
	I0815 01:18:27.787298   63084 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:18:27.788521   63084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:18:27.789786   63084 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:18:27.790893   63084 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:27.791921   63084 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:18:27.793084   63084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:18:27.794666   63084 config.go:182] Loaded profile config "pause-064537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:18:27.795215   63084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:27.795295   63084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:27.810605   63084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0815 01:18:27.811066   63084 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:27.811649   63084 main.go:141] libmachine: Using API Version  1
	I0815 01:18:27.811669   63084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:27.811960   63084 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:27.812110   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:27.812317   63084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:18:27.812602   63084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:27.812692   63084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:27.827276   63084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0815 01:18:27.827666   63084 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:27.828062   63084 main.go:141] libmachine: Using API Version  1
	I0815 01:18:27.828084   63084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:27.828427   63084 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:27.828597   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:27.864067   63084 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:18:27.865380   63084 start.go:297] selected driver: kvm2
	I0815 01:18:27.865398   63084 start.go:901] validating driver "kvm2" against &{Name:pause-064537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-064537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:27.865593   63084 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:18:27.865982   63084 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:27.866052   63084 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:18:27.881100   63084 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:18:27.882014   63084 cni.go:84] Creating CNI manager for ""
	I0815 01:18:27.882118   63084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:27.882221   63084 start.go:340] cluster config:
	{Name:pause-064537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-064537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:27.882406   63084 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:27.884330   63084 out.go:177] * Starting "pause-064537" primary control-plane node in "pause-064537" cluster
	I0815 01:18:27.885583   63084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:18:27.885619   63084 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:18:27.885629   63084 cache.go:56] Caching tarball of preloaded images
	I0815 01:18:27.885710   63084 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:18:27.885724   63084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:18:27.885840   63084 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/config.json ...
	I0815 01:18:27.886018   63084 start.go:360] acquireMachinesLock for pause-064537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:18:36.741209   63084 start.go:364] duration metric: took 8.855165837s to acquireMachinesLock for "pause-064537"
	I0815 01:18:36.741254   63084 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:18:36.741273   63084 fix.go:54] fixHost starting: 
	I0815 01:18:36.741636   63084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:36.741682   63084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:36.759438   63084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0815 01:18:36.759971   63084 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:36.760470   63084 main.go:141] libmachine: Using API Version  1
	I0815 01:18:36.760490   63084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:36.760886   63084 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:36.761103   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:36.761259   63084 main.go:141] libmachine: (pause-064537) Calling .GetState
	I0815 01:18:36.763296   63084 fix.go:112] recreateIfNeeded on pause-064537: state=Running err=<nil>
	W0815 01:18:36.763321   63084 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:18:36.765392   63084 out.go:177] * Updating the running kvm2 "pause-064537" VM ...
	I0815 01:18:36.766529   63084 machine.go:94] provisionDockerMachine start ...
	I0815 01:18:36.766561   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:36.766784   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:36.769764   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.770172   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:36.770197   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.770371   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:36.770561   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.770742   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.770870   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:36.771071   63084 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:36.771296   63084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0815 01:18:36.771310   63084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:18:36.869173   63084 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-064537
	
	I0815 01:18:36.869204   63084 main.go:141] libmachine: (pause-064537) Calling .GetMachineName
	I0815 01:18:36.869433   63084 buildroot.go:166] provisioning hostname "pause-064537"
	I0815 01:18:36.869456   63084 main.go:141] libmachine: (pause-064537) Calling .GetMachineName
	I0815 01:18:36.869655   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:36.872376   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.872769   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:36.872795   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.872935   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:36.873118   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.873270   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.873389   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:36.873554   63084 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:36.873722   63084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0815 01:18:36.873733   63084 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-064537 && echo "pause-064537" | sudo tee /etc/hostname
	I0815 01:18:36.988936   63084 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-064537
	
	I0815 01:18:36.988966   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:36.991874   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.992187   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:36.992230   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:36.992353   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:36.992544   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.992797   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:36.992998   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:36.993257   63084 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:36.993453   63084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0815 01:18:36.993475   63084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-064537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-064537/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-064537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:18:37.101249   63084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:18:37.101279   63084 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:18:37.101337   63084 buildroot.go:174] setting up certificates
	I0815 01:18:37.101349   63084 provision.go:84] configureAuth start
	I0815 01:18:37.101363   63084 main.go:141] libmachine: (pause-064537) Calling .GetMachineName
	I0815 01:18:37.101655   63084 main.go:141] libmachine: (pause-064537) Calling .GetIP
	I0815 01:18:37.104627   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.105041   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:37.105069   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.105221   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:37.107686   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.108043   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:37.108068   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.108191   63084 provision.go:143] copyHostCerts
	I0815 01:18:37.108266   63084 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:18:37.108284   63084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:18:37.108366   63084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:18:37.108546   63084 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:18:37.108559   63084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:18:37.108594   63084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:18:37.108719   63084 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:18:37.108726   63084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:18:37.108770   63084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:18:37.108830   63084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.pause-064537 san=[127.0.0.1 192.168.61.243 localhost minikube pause-064537]
	I0815 01:18:37.162308   63084 provision.go:177] copyRemoteCerts
	I0815 01:18:37.162381   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:18:37.162403   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:37.165434   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.165836   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:37.165870   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.166060   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:37.166267   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:37.166438   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:37.166632   63084 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/pause-064537/id_rsa Username:docker}
	I0815 01:18:37.246306   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 01:18:37.272177   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:18:37.296991   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:18:37.326183   63084 provision.go:87] duration metric: took 224.817323ms to configureAuth
	I0815 01:18:37.326218   63084 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:18:37.326503   63084 config.go:182] Loaded profile config "pause-064537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:18:37.326638   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:37.329456   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.329883   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:37.329921   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:37.330111   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:37.330290   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:37.330473   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:37.330592   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:37.330769   63084 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:37.330940   63084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0815 01:18:37.330954   63084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:18:42.878195   63084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:18:42.878223   63084 machine.go:97] duration metric: took 6.111670388s to provisionDockerMachine
	I0815 01:18:42.878235   63084 start.go:293] postStartSetup for "pause-064537" (driver="kvm2")
	I0815 01:18:42.878247   63084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:18:42.878267   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:42.878646   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:18:42.878678   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:42.881775   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:42.882170   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:42.882197   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:42.882334   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:42.882539   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:42.882674   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:42.882822   63084 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/pause-064537/id_rsa Username:docker}
	I0815 01:18:42.968019   63084 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:18:42.971980   63084 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:18:42.972003   63084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:18:42.972064   63084 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:18:42.972167   63084 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:18:42.972298   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:18:42.982281   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:18:43.012677   63084 start.go:296] duration metric: took 134.429878ms for postStartSetup
	I0815 01:18:43.012720   63084 fix.go:56] duration metric: took 6.2714566s for fixHost
	I0815 01:18:43.012744   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:43.016442   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.016882   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:43.016913   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.017117   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:43.017417   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:43.017614   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:43.017787   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:43.018006   63084 main.go:141] libmachine: Using SSH client type: native
	I0815 01:18:43.018223   63084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0815 01:18:43.018237   63084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:18:43.121219   63084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723684723.109308844
	
	I0815 01:18:43.121246   63084 fix.go:216] guest clock: 1723684723.109308844
	I0815 01:18:43.121257   63084 fix.go:229] Guest: 2024-08-15 01:18:43.109308844 +0000 UTC Remote: 2024-08-15 01:18:43.012725659 +0000 UTC m=+15.264089652 (delta=96.583185ms)
	I0815 01:18:43.121301   63084 fix.go:200] guest clock delta is within tolerance: 96.583185ms
	I0815 01:18:43.121315   63084 start.go:83] releasing machines lock for "pause-064537", held for 6.380080064s
	I0815 01:18:43.121347   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:43.121653   63084 main.go:141] libmachine: (pause-064537) Calling .GetIP
	I0815 01:18:43.124231   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.124588   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:43.124613   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.124787   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:43.125296   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:43.125466   63084 main.go:141] libmachine: (pause-064537) Calling .DriverName
	I0815 01:18:43.125554   63084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:18:43.125604   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:43.125689   63084 ssh_runner.go:195] Run: cat /version.json
	I0815 01:18:43.125714   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHHostname
	I0815 01:18:43.128298   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.128610   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.128704   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:43.128730   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.128846   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:43.129016   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:43.129063   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:43.129087   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:43.129176   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:43.129254   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHPort
	I0815 01:18:43.129332   63084 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/pause-064537/id_rsa Username:docker}
	I0815 01:18:43.129399   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHKeyPath
	I0815 01:18:43.129498   63084 main.go:141] libmachine: (pause-064537) Calling .GetSSHUsername
	I0815 01:18:43.129614   63084 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/pause-064537/id_rsa Username:docker}
	I0815 01:18:43.240710   63084 ssh_runner.go:195] Run: systemctl --version
	I0815 01:18:43.249387   63084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:18:43.408184   63084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:18:43.416772   63084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:18:43.416850   63084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:18:43.426014   63084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 01:18:43.426043   63084 start.go:495] detecting cgroup driver to use...
	I0815 01:18:43.426117   63084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:18:43.444860   63084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:18:43.463430   63084 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:18:43.463497   63084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:18:43.482570   63084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:18:43.501040   63084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:18:43.650659   63084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:18:43.774810   63084 docker.go:233] disabling docker service ...
	I0815 01:18:43.774893   63084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:18:43.792054   63084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:18:43.805672   63084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:18:43.933499   63084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:18:44.061259   63084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:18:44.076233   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:18:44.094835   63084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:18:44.094921   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.104783   63084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:18:44.104849   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.114973   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.124615   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.135279   63084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:18:44.145800   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.155775   63084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.166366   63084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:18:44.176676   63084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:18:44.186359   63084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:18:44.196968   63084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:18:44.346543   63084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:18:45.599100   63084 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.252519627s)
	I0815 01:18:45.599135   63084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:18:45.599188   63084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:18:45.603891   63084 start.go:563] Will wait 60s for crictl version
	I0815 01:18:45.603943   63084 ssh_runner.go:195] Run: which crictl
	I0815 01:18:45.607602   63084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:18:45.646295   63084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:18:45.646393   63084 ssh_runner.go:195] Run: crio --version
	I0815 01:18:45.676219   63084 ssh_runner.go:195] Run: crio --version
	I0815 01:18:45.709092   63084 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:18:45.710245   63084 main.go:141] libmachine: (pause-064537) Calling .GetIP
	I0815 01:18:45.712873   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:45.713292   63084 main.go:141] libmachine: (pause-064537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:b7:0e", ip: ""} in network mk-pause-064537: {Iface:virbr3 ExpiryTime:2024-08-15 02:17:21 +0000 UTC Type:0 Mac:52:54:00:3c:b7:0e Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:pause-064537 Clientid:01:52:54:00:3c:b7:0e}
	I0815 01:18:45.713320   63084 main.go:141] libmachine: (pause-064537) DBG | domain pause-064537 has defined IP address 192.168.61.243 and MAC address 52:54:00:3c:b7:0e in network mk-pause-064537
	I0815 01:18:45.713530   63084 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:18:45.717792   63084 kubeadm.go:883] updating cluster {Name:pause-064537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-064537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:18:45.717951   63084 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:18:45.718009   63084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:18:45.765808   63084 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:18:45.765832   63084 crio.go:433] Images already preloaded, skipping extraction
	I0815 01:18:45.765885   63084 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:18:45.798675   63084 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:18:45.798697   63084 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:18:45.798706   63084 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0815 01:18:45.798835   63084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-064537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-064537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:18:45.798915   63084 ssh_runner.go:195] Run: crio config
	I0815 01:18:45.842433   63084 cni.go:84] Creating CNI manager for ""
	I0815 01:18:45.842458   63084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:45.842474   63084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:18:45.842494   63084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-064537 NodeName:pause-064537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:18:45.842633   63084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-064537"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:18:45.842704   63084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:18:45.852449   63084 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:18:45.852514   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:18:45.862490   63084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 01:18:45.878242   63084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:18:45.894955   63084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0815 01:18:45.911628   63084 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0815 01:18:45.915228   63084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:18:46.126727   63084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:18:46.254856   63084 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537 for IP: 192.168.61.243
	I0815 01:18:46.254881   63084 certs.go:194] generating shared ca certs ...
	I0815 01:18:46.254895   63084 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:18:46.255028   63084 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:18:46.255107   63084 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:18:46.255119   63084 certs.go:256] generating profile certs ...
	I0815 01:18:46.255211   63084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/client.key
	I0815 01:18:46.255275   63084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/apiserver.key.91ec1661
	I0815 01:18:46.255307   63084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/proxy-client.key
	I0815 01:18:46.255411   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:18:46.255437   63084 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:18:46.255445   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:18:46.255484   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:18:46.255514   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:18:46.255534   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:18:46.255587   63084 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:18:46.256212   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:18:46.341052   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:18:46.480040   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:18:46.555112   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:18:46.647737   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 01:18:46.745227   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 01:18:46.796699   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:18:46.888274   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/pause-064537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:18:46.930946   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:18:46.978759   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:18:47.017483   63084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:18:47.050910   63084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:18:47.078385   63084 ssh_runner.go:195] Run: openssl version
	I0815 01:18:47.086628   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:18:47.103246   63084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:18:47.109291   63084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:18:47.109354   63084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:18:47.118827   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:18:47.165811   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:18:47.244560   63084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:47.255371   63084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:47.255426   63084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:18:47.273725   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:18:47.301721   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:18:47.319056   63084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:18:47.332627   63084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:18:47.332710   63084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:18:47.340883   63084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:18:47.361539   63084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:18:47.368987   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:18:47.380103   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:18:47.387425   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:18:47.396450   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:18:47.409134   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:18:47.423655   63084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:18:47.430470   63084 kubeadm.go:392] StartCluster: {Name:pause-064537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-064537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:47.430618   63084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:18:47.430676   63084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:18:47.496304   63084 cri.go:89] found id: "b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2"
	I0815 01:18:47.496330   63084 cri.go:89] found id: "65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d"
	I0815 01:18:47.496336   63084 cri.go:89] found id: "5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da"
	I0815 01:18:47.496340   63084 cri.go:89] found id: "2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657"
	I0815 01:18:47.496344   63084 cri.go:89] found id: "e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9"
	I0815 01:18:47.496349   63084 cri.go:89] found id: "7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2"
	I0815 01:18:47.496353   63084 cri.go:89] found id: "618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a"
	I0815 01:18:47.496356   63084 cri.go:89] found id: "75c4acd722339192c314d4a56d694984a3727fcea92ca1a0453ca7fae22aa897"
	I0815 01:18:47.496361   63084 cri.go:89] found id: "5dd791247d2d708d075f10a649268a8316ff678fb575ad7bd25e9bbed88908ed"
	I0815 01:18:47.496378   63084 cri.go:89] found id: "3e21f84a1ba01a70ec79c115be5f44eea08fb52aaa05a1594812609ebeae4f27"
	I0815 01:18:47.496383   63084 cri.go:89] found id: "39100551d498721e0372891bb0b5176720c0315c8f21db32ad70ce2cf9fdf53f"
	I0815 01:18:47.496392   63084 cri.go:89] found id: ""
	I0815 01:18:47.496443   63084 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-064537 -n pause-064537
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-064537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-064537 logs -n 25: (1.296363125s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-339919             | running-upgrade-339919    | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:14 UTC |
	| start   | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:15 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-312183 sudo           | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:15 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-284326 stop           | minikube                  | jenkins | v1.26.0 | 15 Aug 24 01:15 UTC | 15 Aug 24 01:15 UTC |
	| start   | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:15 UTC | 15 Aug 24 01:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-339919             | running-upgrade-339919    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p cert-expiration-131152             | cert-expiration-131152    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-312183 sudo           | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p pause-064537 --memory=2048         | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-221548 ssh cat     | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	| start   | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-411164 ssh               | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-411164 -- sudo        | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p old-k8s-version-390782             | old-k8s-version-390782    | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p pause-064537                       | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:18:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:18:47.833617   63299 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:18:47.833743   63299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:47.833754   63299 out.go:304] Setting ErrFile to fd 2...
	I0815 01:18:47.833767   63299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:47.833930   63299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:18:47.834422   63299 out.go:298] Setting JSON to false
	I0815 01:18:47.835362   63299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7273,"bootTime":1723677455,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:18:47.835425   63299 start.go:139] virtualization: kvm guest
	I0815 01:18:47.837722   63299 out.go:177] * [kubernetes-upgrade-146394] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:18:47.839188   63299 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:18:47.839180   63299 notify.go:220] Checking for updates...
	I0815 01:18:47.840644   63299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:18:47.842289   63299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:18:47.843671   63299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:47.844792   63299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:18:47.845922   63299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:18:47.847179   63299 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:18:47.847687   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.847751   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.864406   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0815 01:18:47.864885   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.865445   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.865473   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.865851   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.866089   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.866362   63299 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:18:47.866799   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.866848   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.883108   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0815 01:18:47.883539   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.884058   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.884092   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.884420   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.884607   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.921712   63299 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:18:47.922869   63299 start.go:297] selected driver: kvm2
	I0815 01:18:47.922888   63299 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:47.923010   63299 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:18:47.923782   63299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:47.923842   63299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:18:47.938927   63299 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:18:47.939433   63299 cni.go:84] Creating CNI manager for ""
	I0815 01:18:47.939453   63299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:47.939513   63299 start.go:340] cluster config:
	{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:47.939652   63299 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:47.941277   63299 out.go:177] * Starting "kubernetes-upgrade-146394" primary control-plane node in "kubernetes-upgrade-146394" cluster
	I0815 01:18:47.942256   63299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:18:47.942294   63299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:18:47.942304   63299 cache.go:56] Caching tarball of preloaded images
	I0815 01:18:47.942396   63299 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:18:47.942414   63299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:18:47.942523   63299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:18:47.942768   63299 start.go:360] acquireMachinesLock for kubernetes-upgrade-146394: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:18:47.942848   63299 start.go:364] duration metric: took 43.534µs to acquireMachinesLock for "kubernetes-upgrade-146394"
	I0815 01:18:47.942870   63299 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:18:47.942885   63299 fix.go:54] fixHost starting: 
	I0815 01:18:47.943275   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.943314   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.957726   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0815 01:18:47.958157   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.958721   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.958749   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.959086   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.959309   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.959464   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetState
	I0815 01:18:47.961146   63299 fix.go:112] recreateIfNeeded on kubernetes-upgrade-146394: state=Stopped err=<nil>
	I0815 01:18:47.961176   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	W0815 01:18:47.961322   63299 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:18:47.962996   63299 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-146394" ...
	I0815 01:18:47.963965   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .Start
	I0815 01:18:47.964134   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring networks are active...
	I0815 01:18:47.964862   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network default is active
	I0815 01:18:47.965271   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network mk-kubernetes-upgrade-146394 is active
	I0815 01:18:47.965715   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Getting domain xml...
	I0815 01:18:47.966554   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Creating domain...
	I0815 01:18:49.216233   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting to get IP...
	I0815 01:18:49.217172   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.217678   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.217750   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.217644   63332 retry.go:31] will retry after 190.327372ms: waiting for machine to come up
	I0815 01:18:49.410295   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.410884   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.410904   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.410846   63332 retry.go:31] will retry after 290.652704ms: waiting for machine to come up
	I0815 01:18:49.703506   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.704057   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.704083   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.703988   63332 retry.go:31] will retry after 374.905949ms: waiting for machine to come up
	I0815 01:18:50.080861   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.081454   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.081518   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:50.081413   63332 retry.go:31] will retry after 380.337818ms: waiting for machine to come up
	I0815 01:18:50.462794   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.463420   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.463444   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:50.463364   63332 retry.go:31] will retry after 697.728389ms: waiting for machine to come up
	I0815 01:18:51.162604   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:51.163137   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:51.163162   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:51.163080   63332 retry.go:31] will retry after 949.275888ms: waiting for machine to come up
	I0815 01:18:52.113648   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:52.114051   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:52.114072   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:52.114011   63332 retry.go:31] will retry after 1.172343668s: waiting for machine to come up
	I0815 01:18:53.287530   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:53.288034   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:53.288059   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:53.287992   63332 retry.go:31] will retry after 1.308726981s: waiting for machine to come up
	I0815 01:18:54.598276   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:54.598775   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:54.598802   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:54.598735   63332 retry.go:31] will retry after 1.20091007s: waiting for machine to come up
	I0815 01:18:55.800847   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:55.801341   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:55.801369   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:55.801280   63332 retry.go:31] will retry after 2.080792306s: waiting for machine to come up
	I0815 01:19:00.938095   63084 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2 65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d 5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da 2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657 e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a 75c4acd722339192c314d4a56d694984a3727fcea92ca1a0453ca7fae22aa897 5dd791247d2d708d075f10a649268a8316ff678fb575ad7bd25e9bbed88908ed 3e21f84a1ba01a70ec79c115be5f44eea08fb52aaa05a1594812609ebeae4f27 39100551d498721e0372891bb0b5176720c0315c8f21db32ad70ce2cf9fdf53f: (13.291096705s)
	W0815 01:19:00.938173   63084 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2 65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d 5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da 2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657 e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a 75c4acd722339192c314d4a56d694984a3727fcea92ca1a0453ca7fae22aa897 5dd791247d2d708d075f10a649268a8316ff678fb575ad7bd25e9bbed88908ed 3e21f84a1ba01a70ec79c115be5f44eea08fb52aaa05a1594812609ebeae4f27 39100551d498721e0372891bb0b5176720c0315c8f21db32ad70ce2cf9fdf53f: Process exited with status 1
	stdout:
	b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2
	65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d
	5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da
	2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657
	e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9
	
	stderr:
	E0815 01:19:00.924186    3031 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": container with ID starting with 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 not found: ID does not exist" containerID="7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2"
	time="2024-08-15T01:19:00Z" level=fatal msg="stopping the container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": rpc error: code = NotFound desc = could not find container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": container with ID starting with 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 not found: ID does not exist"
	I0815 01:19:00.938239   63084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:19:00.974389   63084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:19:00.984264   63084 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 15 01:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 15 01:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug 15 01:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 15 01:17 /etc/kubernetes/scheduler.conf
	
	I0815 01:19:00.984323   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:19:00.993015   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:19:01.001572   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:19:01.010604   63084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:01.010654   63084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:19:01.019386   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:19:01.027809   63084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:01.027874   63084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:19:01.036642   63084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:19:01.046393   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:01.099935   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:01.807209   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:02.019534   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:02.087772   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
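	[Editor's note] The five ssh_runner commands above re-run kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. As a minimal Go sketch of driving that same sequence, assuming kubeadm and the config path shown in the log are available on the host (this is an illustration, not minikube's actual bootstrapper code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same phase order as in the log above; the config path is taken from the log.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf("sudo kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", phase))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
				return
			}
		}
	}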
	I0815 01:19:02.178235   63084 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:02.178327   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:02.679045   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:18:57.884306   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:57.884785   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:57.884813   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:57.884736   63332 retry.go:31] will retry after 2.214242479s: waiting for machine to come up
	I0815 01:19:00.101595   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:00.102182   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:19:00.102211   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:19:00.102130   63332 retry.go:31] will retry after 2.956379186s: waiting for machine to come up
	I0815 01:19:03.178516   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:03.192493   63084 api_server.go:72] duration metric: took 1.014279562s to wait for apiserver process to appear ...
	I0815 01:19:03.192523   63084 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:19:03.192540   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.229477   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:05.229503   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:05.229516   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.246854   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:05.246876   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:05.693117   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.697653   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:19:05.697689   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:19:06.193048   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:06.197520   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:19:06.197551   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:19:06.692751   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:06.696674   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0815 01:19:06.702744   63084 api_server.go:141] control plane version: v1.31.0
	I0815 01:19:06.702765   63084 api_server.go:131] duration metric: took 3.510235751s to wait for apiserver health ...
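	[Editor's note] The healthz probes above show the apiserver moving from 403 (anonymous request rejected) through 500 (post-start hooks such as rbac/bootstrap-roles still pending) to 200. A minimal sketch of this kind of readiness poll, assuming an HTTPS endpoint with a cluster-internal certificate; the helper name, intervals, and TLS handling here are illustrative and not minikube's api_server.go implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// The apiserver serves a self-signed cluster certificate; skip verification in this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.243:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	Non-200 responses are simply retried rather than treated as fatal, which matches the behaviour in the log: 403 and 500 are logged as warnings and polling continues until the hooks finish.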
	I0815 01:19:06.702774   63084 cni.go:84] Creating CNI manager for ""
	I0815 01:19:06.702780   63084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:19:06.704841   63084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:19:06.705929   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:19:06.715945   63084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
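	[Editor's note] The two commands above create /etc/cni/net.d and write a 496-byte bridge conflist into it; the file's contents are not reproduced in the log. As a rough illustration of the general shape of a bridge CNI conflist, generated from Go (every field value below is a placeholder assumption, not the file minikube actually writes):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pared-down bridge + portmap conflist; values are illustrative only.
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(string(out))
	}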
	I0815 01:19:06.732452   63084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:19:06.732520   63084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 01:19:06.732544   63084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 01:19:06.740922   63084 system_pods.go:59] 6 kube-system pods found
	I0815 01:19:06.740962   63084 system_pods.go:61] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:19:06.740976   63084 system_pods.go:61] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:19:06.740987   63084 system_pods.go:61] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:19:06.741001   63084 system_pods.go:61] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:19:06.741008   63084 system_pods.go:61] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:06.741018   63084 system_pods.go:61] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:19:06.741029   63084 system_pods.go:74] duration metric: took 8.558245ms to wait for pod list to return data ...
	I0815 01:19:06.741039   63084 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:19:06.745233   63084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:19:06.745262   63084 node_conditions.go:123] node cpu capacity is 2
	I0815 01:19:06.745272   63084 node_conditions.go:105] duration metric: took 4.226926ms to run NodePressure ...
	I0815 01:19:06.745288   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:07.002576   63084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:19:07.007579   63084 kubeadm.go:739] kubelet initialised
	I0815 01:19:07.007602   63084 kubeadm.go:740] duration metric: took 5.003617ms waiting for restarted kubelet to initialise ...
	I0815 01:19:07.007609   63084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:07.011873   63084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:03.059540   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:03.059927   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:19:03.059955   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:19:03.059877   63332 retry.go:31] will retry after 4.353508843s: waiting for machine to come up
	I0815 01:19:07.418293   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.418802   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has current primary IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.418824   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Found IP for machine: 192.168.72.130
	I0815 01:19:07.418839   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserving static IP address...
	I0815 01:19:07.419270   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-146394", mac: "52:54:00:c0:3a:c8", ip: "192.168.72.130"} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.419315   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | skip adding static IP to network mk-kubernetes-upgrade-146394 - found existing host DHCP lease matching {name: "kubernetes-upgrade-146394", mac: "52:54:00:c0:3a:c8", ip: "192.168.72.130"}
	I0815 01:19:07.419335   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserved static IP address: 192.168.72.130
	I0815 01:19:07.419349   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting for SSH to be available...
	I0815 01:19:07.419356   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Getting to WaitForSSH function...
	I0815 01:19:07.421472   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.421946   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.421973   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.422085   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH client type: external
	I0815 01:19:07.422105   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa (-rw-------)
	I0815 01:19:07.422130   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:19:07.422139   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | About to run SSH command:
	I0815 01:19:07.422148   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | exit 0
	I0815 01:19:07.548671   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | SSH cmd err, output: <nil>: 
	I0815 01:19:07.549030   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetConfigRaw
	I0815 01:19:07.549687   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:07.552155   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.552437   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.552464   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.552748   63299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:19:07.552966   63299 machine.go:94] provisionDockerMachine start ...
	I0815 01:19:07.552985   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:07.553213   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.556009   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.556439   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.556469   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.556615   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.556826   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.557012   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.557197   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.557450   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.557714   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.557728   63299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:19:07.664550   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:19:07.664580   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.664959   63299 buildroot.go:166] provisioning hostname "kubernetes-upgrade-146394"
	I0815 01:19:07.664984   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.665170   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.667696   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.668060   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.668093   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.668196   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.668377   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.668584   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.668761   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.668914   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.669080   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.669097   63299 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-146394 && echo "kubernetes-upgrade-146394" | sudo tee /etc/hostname
	I0815 01:19:07.786879   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-146394
	
	I0815 01:19:07.786907   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.789615   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.790001   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.790040   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.790246   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.790477   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.790637   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.790830   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.791002   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.791193   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.791211   63299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-146394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-146394/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-146394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:19:07.904846   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
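The SSH snippet above keeps local name resolution consistent with the new hostname: if no line in /etc/hosts already ends in kubernetes-upgrade-146394, an existing 127.0.1.1 entry is rewritten in place, or a new one is appended. A quick check of the result (illustrative output, not captured in this run) would be:

    grep '^127.0.1.1' /etc/hosts
    # 127.0.1.1 kubernetes-upgrade-146394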
	I0815 01:19:07.904872   63299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:19:07.904896   63299 buildroot.go:174] setting up certificates
	I0815 01:19:07.904909   63299 provision.go:84] configureAuth start
	I0815 01:19:07.904921   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.905202   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:07.908466   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.908919   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.908961   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.909047   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.911366   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.911662   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.911690   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.911809   63299 provision.go:143] copyHostCerts
	I0815 01:19:07.911863   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:19:07.911884   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:19:07.911955   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:19:07.912098   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:19:07.912111   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:19:07.912141   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:19:07.912224   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:19:07.912235   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:19:07.912264   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:19:07.912343   63299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-146394 san=[127.0.0.1 192.168.72.130 kubernetes-upgrade-146394 localhost minikube]
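The server certificate generated here carries the SAN list printed in the log line above (loopback, the VM's DHCP address, the machine name, localhost and minikube). One way to confirm that on the host, with illustrative output only, is a plain openssl query:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    #   X509v3 Subject Alternative Name:
    #       DNS:kubernetes-upgrade-146394, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.72.130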
	I0815 01:19:08.089615   63299 provision.go:177] copyRemoteCerts
	I0815 01:19:08.089694   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:19:08.089731   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.092416   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.092805   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.092840   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.093010   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.093204   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.093366   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.093546   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.178776   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:19:08.201276   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0815 01:19:08.223204   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:19:08.244961   63299 provision.go:87] duration metric: took 340.040199ms to configureAuth
	I0815 01:19:08.244989   63299 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:19:08.245211   63299 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:19:08.245293   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.247759   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.248141   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.248183   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.248383   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.248539   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.248701   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.248818   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.249010   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:08.249254   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:08.249283   63299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:19:08.510629   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:19:08.510659   63299 machine.go:97] duration metric: took 957.678971ms to provisionDockerMachine
	I0815 01:19:08.510675   63299 start.go:293] postStartSetup for "kubernetes-upgrade-146394" (driver="kvm2")
	I0815 01:19:08.510705   63299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:19:08.510739   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.511061   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:19:08.511088   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.514111   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.514688   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.514721   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.514879   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.515138   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.515338   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.515502   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.603211   63299 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:19:08.608152   63299 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:19:08.608178   63299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:19:08.608244   63299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:19:08.608354   63299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:19:08.608482   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:19:08.620189   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:19:08.643934   63299 start.go:296] duration metric: took 133.24574ms for postStartSetup
	I0815 01:19:08.643971   63299 fix.go:56] duration metric: took 20.701095201s for fixHost
	I0815 01:19:08.643989   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.647018   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.647369   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.647411   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.647529   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.647730   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.647904   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.648090   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.648298   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:08.648524   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:08.648541   63299 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:19:08.753219   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723684748.714418371
	
	I0815 01:19:08.753250   63299 fix.go:216] guest clock: 1723684748.714418371
	I0815 01:19:08.753260   63299 fix.go:229] Guest: 2024-08-15 01:19:08.714418371 +0000 UTC Remote: 2024-08-15 01:19:08.643974847 +0000 UTC m=+20.852833463 (delta=70.443524ms)
	I0815 01:19:08.753291   63299 fix.go:200] guest clock delta is within tolerance: 70.443524ms
	I0815 01:19:08.753297   63299 start.go:83] releasing machines lock for "kubernetes-upgrade-146394", held for 20.810435446s
	I0815 01:19:08.753317   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.753575   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:08.756453   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.756792   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.756821   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.757006   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757522   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757681   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757800   63299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:19:08.757841   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.757870   63299 ssh_runner.go:195] Run: cat /version.json
	I0815 01:19:08.757891   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.760766   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.760789   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761131   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.761173   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761200   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.761213   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761314   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.761511   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.761676   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.761677   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.761837   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.761844   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.761983   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.761983   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.841502   63299 ssh_runner.go:195] Run: systemctl --version
	I0815 01:19:08.874089   63299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:19:09.020579   63299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:19:09.026562   63299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:19:09.026633   63299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:19:09.044643   63299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:19:09.044686   63299 start.go:495] detecting cgroup driver to use...
	I0815 01:19:09.044761   63299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:19:09.061861   63299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:19:09.076296   63299 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:19:09.076378   63299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:19:09.090842   63299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:19:09.103631   63299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:19:09.216218   63299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:19:09.390461   63299 docker.go:233] disabling docker service ...
	I0815 01:19:09.390530   63299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:19:09.404627   63299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:19:09.417547   63299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:19:09.546132   63299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:19:09.661977   63299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
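By this point cri-docker and docker have been stopped, disabled and masked, and containerd was stopped a few lines earlier, leaving cri-o (restarted above) as the only runtime. One way to confirm that state on the guest, with the expected answers shown as comments (this invocation is not part of the run), would be:

    sudo systemctl is-active crio docker containerd cri-docker.service
    # active
    # inactive
    # inactive
    # inactive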
	I0815 01:19:09.675076   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:19:09.692400   63299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:19:09.692474   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.702269   63299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:19:09.702333   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.712748   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.722789   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.732589   63299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:19:09.742711   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.752460   63299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.768474   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
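The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the pause image matching Kubernetes v1.31.0, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and allows unprivileged low ports. Reconstructed from those commands (not copied from the VM), the relevant fragment should end up roughly as follows:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",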
	I0815 01:19:09.778549   63299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:19:09.787207   63299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:19:09.787262   63299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:19:09.798967   63299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:19:09.808021   63299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:09.930178   63299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:19:10.065578   63299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:19:10.065664   63299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:19:10.070333   63299 start.go:563] Will wait 60s for crictl version
	I0815 01:19:10.070388   63299 ssh_runner.go:195] Run: which crictl
	I0815 01:19:10.074236   63299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:19:10.121834   63299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:19:10.121957   63299 ssh_runner.go:195] Run: crio --version
	I0815 01:19:10.150305   63299 ssh_runner.go:195] Run: crio --version
	I0815 01:19:10.180341   63299 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:19:09.019262   63084 pod_ready.go:102] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:11.023157   63084 pod_ready.go:102] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:11.517731   63084 pod_ready.go:92] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:11.517751   63084 pod_ready.go:81] duration metric: took 4.505856705s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:11.517761   63084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:12.525658   63084 pod_ready.go:92] pod "etcd-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:12.525687   63084 pod_ready.go:81] duration metric: took 1.007917532s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:12.525700   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:10.181492   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:10.184071   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:10.184494   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:10.184524   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:10.184752   63299 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:19:10.188580   63299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
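That one-liner strips any stale host.minikube.internal entry from /etc/hosts and re-appends the current gateway address, so workloads in the guest can always reach the host. The net effect, as written by the command itself, is a single line such as:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.72.1	host.minikube.internal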
	I0815 01:19:10.200739   63299 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:19:10.200863   63299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:19:10.200925   63299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:19:10.246754   63299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:19:10.246824   63299 ssh_runner.go:195] Run: which lz4
	I0815 01:19:10.250585   63299 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:19:10.254389   63299 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:19:10.254419   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:19:11.482693   63299 crio.go:462] duration metric: took 1.232142964s to copy over tarball
	I0815 01:19:11.482763   63299 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:19:14.033094   63084 pod_ready.go:92] pod "kube-apiserver-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:14.033116   63084 pod_ready.go:81] duration metric: took 1.507407378s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:14.033126   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:16.095360   63084 pod_ready.go:102] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:13.477196   63299 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.994397565s)
	I0815 01:19:13.477240   63299 crio.go:469] duration metric: took 1.994514s to extract the tarball
	I0815 01:19:13.477251   63299 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:19:13.514404   63299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:19:13.559065   63299 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:19:13.559088   63299 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:19:13.559095   63299 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0815 01:19:13.559189   63299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-146394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
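The kubelet drop-in shown above uses standard systemd override semantics: the bare ExecStart= line clears whatever ExecStart the base kubelet.service defines, and the next line re-declares it with the per-node flags (bootstrap kubeconfig, config file, hostname override, node IP). Once the unit and drop-in are copied onto the guest a few lines below, the merged result could be inspected with:

    sudo systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service followed by
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the drop-in above)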
	I0815 01:19:13.559246   63299 ssh_runner.go:195] Run: crio config
	I0815 01:19:13.601202   63299 cni.go:84] Creating CNI manager for ""
	I0815 01:19:13.601221   63299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:19:13.601230   63299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:19:13.601252   63299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-146394 NodeName:kubernetes-upgrade-146394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:19:13.601431   63299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-146394"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:19:13.601505   63299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:19:13.610710   63299 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:19:13.610778   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:19:13.619166   63299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0815 01:19:13.634115   63299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:19:13.649083   63299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
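With the generated manifest now sitting at /var/tmp/minikube/kubeadm.yaml.new on the guest, it could be sanity-checked against the v1beta3 schema using the bundled binary, assuming this kubeadm build ships the config validate subcommand (recent releases do, but the invocation below is not part of this log):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new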
	I0815 01:19:13.664194   63299 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0815 01:19:13.667528   63299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:19:13.678568   63299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:13.810749   63299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:19:13.827580   63299 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394 for IP: 192.168.72.130
	I0815 01:19:13.827605   63299 certs.go:194] generating shared ca certs ...
	I0815 01:19:13.827632   63299 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:13.827813   63299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:19:13.827870   63299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:19:13.827883   63299 certs.go:256] generating profile certs ...
	I0815 01:19:13.828000   63299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key
	I0815 01:19:13.828070   63299 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c
	I0815 01:19:13.828120   63299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key
	I0815 01:19:13.828250   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:19:13.828284   63299 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:19:13.828298   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:19:13.828330   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:19:13.828359   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:19:13.828388   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:19:13.828443   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:19:13.829289   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:19:13.855426   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:19:13.884301   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:19:13.929511   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:19:13.954894   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 01:19:13.979301   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:19:14.006175   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:19:14.032056   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:19:14.055525   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:19:14.077374   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:19:14.099513   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:19:14.121376   63299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:19:14.136538   63299 ssh_runner.go:195] Run: openssl version
	I0815 01:19:14.141756   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:19:14.152216   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.157453   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.157509   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.163023   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:19:14.172517   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:19:14.182240   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.186143   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.186195   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.191381   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:19:14.200965   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:19:14.210383   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.214168   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.214218   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.219301   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
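The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up trusted CAs, and the ln -fs commands create the matching <hash>.0 symlinks in /etc/ssl/certs. For example, for the minikube CA (hash value taken from the symlink name in the log; the command itself is illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> resolved via the /etc/ssl/certs/b5213941.0 symlink created above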
	I0815 01:19:14.228809   63299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:19:14.232583   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:19:14.237955   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:19:14.243236   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:19:14.248649   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:19:14.254071   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:19:14.259492   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
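Each -checkend 86400 probe above succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours), which is how this restart path spots control-plane certificates that are about to expire. A standalone version of the same check might look like:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver.crt \
      && echo 'valid for at least another 24h' \
      || echo 'expires within 24h (or already expired)'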
	I0815 01:19:14.265009   63299 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:19:14.265121   63299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:19:14.265167   63299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:19:14.304190   63299 cri.go:89] found id: ""
	I0815 01:19:14.304258   63299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:19:14.313686   63299 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:19:14.313702   63299 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:19:14.313743   63299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:19:14.322755   63299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:14.323399   63299 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-146394" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:19:14.323745   63299 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-146394" cluster setting kubeconfig missing "kubernetes-upgrade-146394" context setting]
	I0815 01:19:14.324205   63299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:14.325085   63299 kapi.go:59] client config for kubernetes-upgrade-146394: &rest.Config{Host:"https://192.168.72.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 01:19:14.325686   63299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:19:14.335018   63299 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.72.130
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-146394"
	   kubeletExtraArgs:
	     node-ip: 192.168.72.130
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	@@ -33,14 +33,12 @@
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+kubernetesVersion: v1.31.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I0815 01:19:14.335034   63299 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:19:14.335046   63299 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:19:14.335084   63299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:19:14.372825   63299 cri.go:89] found id: ""
	I0815 01:19:14.372885   63299 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:19:14.389545   63299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:19:14.399792   63299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:19:14.399807   63299 kubeadm.go:157] found existing configuration files:
	
	I0815 01:19:14.399869   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:19:14.409132   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:19:14.409186   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:19:14.419123   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:19:14.427390   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:19:14.427441   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:19:14.435793   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:19:14.443587   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:19:14.443645   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:19:14.452206   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:19:14.459942   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:19:14.459978   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:19:14.468240   63299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
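The block above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here the files simply do not exist yet), after which the new kubeadm config replaces the old one. A rough shell equivalent of that loop, with the endpoint taken from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Drop any kubeconfig that does not point at the expected endpoint.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml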
	I0815 01:19:14.477239   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:14.585380   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:15.971753   63299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.386339993s)
	I0815 01:19:15.971783   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.194708   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.260257   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.366713   63299 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:16.366788   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:16.867529   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:17.367658   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
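Waiting for the apiserver "process to appear" is a plain pgrep poll against the kube-apiserver command line, retried roughly every half second as the timestamps above show. A sketch of that loop (the interval and the absence of a timeout are illustrative only):

    # Poll until a kube-apiserver started for this minikube profile is running.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done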
	I0815 01:19:18.039433   63084 pod_ready.go:92] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.039453   63084 pod_ready.go:81] duration metric: took 4.006320243s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.039463   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.043917   63084 pod_ready.go:92] pod "kube-proxy-jkgw5" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.043932   63084 pod_ready.go:81] duration metric: took 4.462973ms for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.043940   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.048312   63084 pod_ready.go:92] pod "kube-scheduler-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.048416   63084 pod_ready.go:81] duration metric: took 4.461567ms for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.048439   63084 pod_ready.go:38] duration metric: took 11.040820763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:18.048457   63084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:19:18.059929   63084 ops.go:34] apiserver oom_adj: -16
	I0815 01:19:18.059945   63084 kubeadm.go:597] duration metric: took 30.485173576s to restartPrimaryControlPlane
	I0815 01:19:18.059955   63084 kubeadm.go:394] duration metric: took 30.629506931s to StartCluster
	I0815 01:19:18.059972   63084 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:18.060056   63084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:19:18.061228   63084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:18.061441   63084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:19:18.061508   63084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:19:18.061686   63084 config.go:182] Loaded profile config "pause-064537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:19:18.063294   63084 out.go:177] * Enabled addons: 
	I0815 01:19:18.063317   63084 out.go:177] * Verifying Kubernetes components...
	I0815 01:19:18.064508   63084 addons.go:510] duration metric: took 3.000402ms for enable addons: enabled=[]
	I0815 01:19:18.064602   63084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:18.223949   63084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:19:18.238421   63084 node_ready.go:35] waiting up to 6m0s for node "pause-064537" to be "Ready" ...
	I0815 01:19:18.241682   63084 node_ready.go:49] node "pause-064537" has status "Ready":"True"
	I0815 01:19:18.241707   63084 node_ready.go:38] duration metric: took 3.251397ms for node "pause-064537" to be "Ready" ...
	I0815 01:19:18.241727   63084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:18.246810   63084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.251879   63084 pod_ready.go:92] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.251903   63084 pod_ready.go:81] duration metric: took 5.061525ms for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.251914   63084 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.438780   63084 pod_ready.go:92] pod "etcd-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.438808   63084 pod_ready.go:81] duration metric: took 186.883645ms for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.438820   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.838486   63084 pod_ready.go:92] pod "kube-apiserver-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.838515   63084 pod_ready.go:81] duration metric: took 399.686358ms for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.838529   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.237366   63084 pod_ready.go:92] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:19.237390   63084 pod_ready.go:81] duration metric: took 398.85405ms for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.237400   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.637642   63084 pod_ready.go:92] pod "kube-proxy-jkgw5" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:19.637666   63084 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.637675   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:20.037472   63084 pod_ready.go:92] pod "kube-scheduler-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:20.037505   63084 pod_ready.go:81] duration metric: took 399.822028ms for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:20.037515   63084 pod_ready.go:38] duration metric: took 1.79577475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
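The readiness loop that just finished checks each system-critical pod for the Ready condition using the selectors listed in the log. The test drives this through its Kubernetes client, but an equivalent wait can be expressed with kubectl against the profile's context; the selectors mirror the log and the 6-minute timeout matches the budget stated there:

    # CoreDNS and kube-proxy are matched by k8s-app, the static control-plane
    # pods by their component label.
    kubectl --context pause-064537 -n kube-system wait pod --for=condition=Ready \
      -l 'k8s-app in (kube-dns, kube-proxy)' --timeout=6m
    kubectl --context pause-064537 -n kube-system wait pod --for=condition=Ready \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --timeout=6m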
	I0815 01:19:20.037551   63084 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:20.037620   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:20.055453   63084 api_server.go:72] duration metric: took 1.993983569s to wait for apiserver process to appear ...
	I0815 01:19:20.055478   63084 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:19:20.055501   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:20.062554   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0815 01:19:20.063800   63084 api_server.go:141] control plane version: v1.31.0
	I0815 01:19:20.063820   63084 api_server.go:131] duration metric: took 8.334057ms to wait for apiserver health ...
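The healthz probe is an HTTPS GET against the apiserver; a 200 with body "ok" means it is serving, and the control-plane version is then read from /version. Both endpoints are readable without credentials under the default RBAC bindings, so a curl equivalent (address from the log, CA path as provisioned for this test run) looks like:

    curl --cacert /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt \
      https://192.168.61.243:8443/healthz
    curl --cacert /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt \
      https://192.168.61.243:8443/version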
	I0815 01:19:20.063830   63084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:19:20.239251   63084 system_pods.go:59] 6 kube-system pods found
	I0815 01:19:20.239282   63084 system_pods.go:61] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running
	I0815 01:19:20.239289   63084 system_pods.go:61] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running
	I0815 01:19:20.239294   63084 system_pods.go:61] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running
	I0815 01:19:20.239299   63084 system_pods.go:61] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running
	I0815 01:19:20.239304   63084 system_pods.go:61] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:20.239308   63084 system_pods.go:61] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running
	I0815 01:19:20.239316   63084 system_pods.go:74] duration metric: took 175.478885ms to wait for pod list to return data ...
	I0815 01:19:20.239334   63084 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:19:20.437715   63084 default_sa.go:45] found service account: "default"
	I0815 01:19:20.437744   63084 default_sa.go:55] duration metric: took 198.402501ms for default service account to be created ...
	I0815 01:19:20.437755   63084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:19:20.640145   63084 system_pods.go:86] 6 kube-system pods found
	I0815 01:19:20.640186   63084 system_pods.go:89] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running
	I0815 01:19:20.640194   63084 system_pods.go:89] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running
	I0815 01:19:20.640199   63084 system_pods.go:89] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running
	I0815 01:19:20.640203   63084 system_pods.go:89] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running
	I0815 01:19:20.640208   63084 system_pods.go:89] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:20.640212   63084 system_pods.go:89] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running
	I0815 01:19:20.640219   63084 system_pods.go:126] duration metric: took 202.458517ms to wait for k8s-apps to be running ...
	I0815 01:19:20.640227   63084 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:19:20.640288   63084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:19:20.655626   63084 system_svc.go:56] duration metric: took 15.388152ms WaitForService to wait for kubelet
	I0815 01:19:20.655659   63084 kubeadm.go:582] duration metric: took 2.594193144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:19:20.655681   63084 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:19:20.837367   63084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:19:20.837399   63084 node_conditions.go:123] node cpu capacity is 2
	I0815 01:19:20.837412   63084 node_conditions.go:105] duration metric: took 181.72528ms to run NodePressure ...
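The NodePressure verification reads capacity and conditions straight off the Node object; the 17734596Ki of ephemeral storage and 2 CPUs above come from .status.capacity. The same figures and the pressure conditions can be pulled by hand:

    kubectl --context pause-064537 get node pause-064537 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl --context pause-064537 get node pause-064537 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'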
	I0815 01:19:20.837427   63084 start.go:241] waiting for startup goroutines ...
	I0815 01:19:20.837437   63084 start.go:246] waiting for cluster config update ...
	I0815 01:19:20.837445   63084 start.go:255] writing updated cluster config ...
	I0815 01:19:20.837760   63084 ssh_runner.go:195] Run: rm -f paused
	I0815 01:19:20.887164   63084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:19:20.889269   63084 out.go:177] * Done! kubectl is now configured to use "pause-064537" cluster and "default" namespace by default
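With the restart finished, the profile's context is the kubeconfig default, so plain kubectl commands now target this cluster:

    kubectl config current-context          # pause-064537
    kubectl -n kube-system get pods -o wide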
	
	
	==> CRI-O <==
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.512244113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684761512209183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=460f350f-925e-4f0a-8356-a8e007c1701a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.512960708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c7daed5-dfb0-4a45-962a-eef20d305147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.513047796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c7daed5-dfb0-4a45-962a-eef20d305147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.513502646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c7daed5-dfb0-4a45-962a-eef20d305147 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.561385086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24233b35-2556-44cf-870f-740122b500c4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.561461100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24233b35-2556-44cf-870f-740122b500c4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.562516353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c28e50ed-85e9-439a-a59d-f0478d0752d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.562928126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684761562903061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c28e50ed-85e9-439a-a59d-f0478d0752d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.563490577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfa581da-2ab8-4f5c-9a73-f234e8c3e0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.563562531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfa581da-2ab8-4f5c-9a73-f234e8c3e0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.563831497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfa581da-2ab8-4f5c-9a73-f234e8c3e0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.603804424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba7f9537-b750-412d-85fb-5a5d60cea3d6 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.603918631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba7f9537-b750-412d-85fb-5a5d60cea3d6 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.604928884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa092724-5ac3-4cf4-9de9-4ef2e51ad6f2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.605355241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684761605328922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa092724-5ac3-4cf4-9de9-4ef2e51ad6f2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.605798007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b91b5e-c9e3-4902-9a01-76b6a0555ac7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.605884675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b91b5e-c9e3-4902-9a01-76b6a0555ac7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.606266883Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b91b5e-c9e3-4902-9a01-76b6a0555ac7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.647665423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e16f6d7d-295f-4e65-b979-5566241ad5a4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.647749265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e16f6d7d-295f-4e65-b979-5566241ad5a4 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.648681725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=027ee628-feac-4d6b-9a4d-2fbd8d5bf9ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.649101601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684761649077047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=027ee628-feac-4d6b-9a4d-2fbd8d5bf9ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.649840851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3ec3057-9d79-4837-8659-eec38407333a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.649913664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3ec3057-9d79-4837-8659-eec38407333a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:21 pause-064537 crio[2356]: time="2024-08-15 01:19:21.650235262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3ec3057-9d79-4837-8659-eec38407333a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47863b4ea2f8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 seconds ago       Running             coredns                   2                   a4858a59d1389       coredns-6f6b679f8f-gh5hb
	cb15264d71257       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 seconds ago       Running             etcd                      2                   9405237eeeaea       etcd-pause-064537
	2d0e4e057a27a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   19 seconds ago       Running             kube-controller-manager   2                   d7637483c0794       kube-controller-manager-pause-064537
	804f20a0bc1b3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   19 seconds ago       Running             kube-scheduler            2                   c58432f91fc8d       kube-scheduler-pause-064537
	c73e18991895e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 seconds ago       Running             kube-apiserver            2                   584a51b67fa8c       kube-apiserver-pause-064537
	b32a9de8341b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago       Exited              coredns                   1                   a4858a59d1389       coredns-6f6b679f8f-gh5hb
	83469d0f301fa       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   34 seconds ago       Running             kube-proxy                1                   9dfbaae81b047       kube-proxy-jkgw5
	65bdb78cfa5fb       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   35 seconds ago       Exited              kube-scheduler            1                   c58432f91fc8d       kube-scheduler-pause-064537
	5c6b7316f2555       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago       Exited              etcd                      1                   9405237eeeaea       etcd-pause-064537
	2e0a1e80f7a7f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   35 seconds ago       Exited              kube-apiserver            1                   584a51b67fa8c       kube-apiserver-pause-064537
	e3eb9eded28db       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   35 seconds ago       Exited              kube-controller-manager   1                   d7637483c0794       kube-controller-manager-pause-064537
	618741a2dad66       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   9aa15f7f056ac       kube-proxy-jkgw5
	
	
	==> coredns [47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35450 - 33988 "HINFO IN 7249281135217051061.6073457569363123844. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009831687s
	
	
	==> coredns [b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2] <==
	
	
	==> describe nodes <==
	Name:               pause-064537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-064537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=pause-064537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_17_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:17:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-064537
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:19:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    pause-064537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bf3990181f484babb24cff6639b727
	  System UUID:                b7bf3990-181f-484b-abb2-4cff6639b727
	  Boot ID:                    eaa23830-0bee-4122-848c-6beb45e711c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gh5hb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-pause-064537                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         94s
	  kube-system                 kube-apiserver-pause-064537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-064537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-jkgw5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-064537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                93s                kubelet          Node pause-064537 status is now: NodeReady
	  Normal  RegisteredNode           89s                node-controller  Node pause-064537 event: Registered Node pause-064537 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-064537 event: Registered Node pause-064537 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.675154] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063212] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060959] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.192833] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.111668] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.256029] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.044902] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.905562] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.065128] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.978760] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.080584] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.807129] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.117441] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 01:18] kauditd_printk_skb: 89 callbacks suppressed
	[ +38.665523] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.139479] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.156231] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.120252] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.267880] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +1.740552] systemd-fstab-generator[2463]: Ignoring "noauto" option for root device
	[  +4.293576] kauditd_printk_skb: 196 callbacks suppressed
	[Aug15 01:19] systemd-fstab-generator[3243]: Ignoring "noauto" option for root device
	[  +6.995050] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.194430] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	
	
	==> etcd [5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da] <==
	{"level":"info","ts":"2024-08-15T01:18:48.146344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:18:48.146407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-15T01:18:48.146462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.150391Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:pause-064537 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:18:48.151517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:18:48.159511Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:18:48.160334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	{"level":"info","ts":"2024-08-15T01:18:48.160723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:18:48.171586Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:18:48.172444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:18:48.206266Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:18:48.206306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:18:50.600600Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T01:18:50.600717Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-064537","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"]}
	{"level":"warn","ts":"2024-08-15T01:18:50.600835Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.600953Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.622647Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.243:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.622862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.243:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T01:18:50.622955Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"704fd09e1c9dce1f","current-leader-member-id":"704fd09e1c9dce1f"}
	{"level":"info","ts":"2024-08-15T01:18:50.627777Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:18:50.627999Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:18:50.628051Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-064537","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"]}
	
	
	==> etcd [cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44] <==
	{"level":"info","ts":"2024-08-15T01:19:03.068344Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"704fd09e1c9dce1f","initial-advertise-peer-urls":["https://192.168.61.243:2380"],"listen-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T01:19:03.068380Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:19:03.068467Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:19:03.068486Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:19:04.088885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.088993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.089001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.089008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.094222Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:pause-064537 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:19:04.094231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:04.094430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:04.094627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:04.094640Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:04.095227Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:04.095423Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:04.096046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:19:04.096398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	{"level":"warn","ts":"2024-08-15T01:19:15.744674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.557039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-064537\" ","response":"range_response_count:1 size:6601"}
	{"level":"info","ts":"2024-08-15T01:19:15.744757Z","caller":"traceutil/trace.go:171","msg":"trace[1999437051] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-064537; range_end:; response_count:1; response_revision:518; }","duration":"222.688922ms","start":"2024-08-15T01:19:15.522053Z","end":"2024-08-15T01:19:15.744742Z","steps":["trace[1999437051] 'range keys from in-memory index tree'  (duration: 222.260232ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:19:15.813803Z","caller":"traceutil/trace.go:171","msg":"trace[247297676] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"182.061253ms","start":"2024-08-15T01:19:15.631728Z","end":"2024-08-15T01:19:15.813789Z","steps":["trace[247297676] 'process raft request'  (duration: 181.727886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:19:16.075889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.565243ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:19:16.076047Z","caller":"traceutil/trace.go:171","msg":"trace[1950054731] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:519; }","duration":"221.745516ms","start":"2024-08-15T01:19:15.854286Z","end":"2024-08-15T01:19:16.076032Z","steps":["trace[1950054731] 'range keys from in-memory index tree'  (duration: 221.543732ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:19:22 up 2 min,  0 users,  load average: 1.45, 0.49, 0.18
	Linux pause-064537 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657] <==
	W0815 01:18:59.732907       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.734293       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.822711       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.847017       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.884874       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.887324       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.959722       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.049606       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.068441       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.084847       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.099415       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.165852       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.178601       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.270029       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.368078       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.408524       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.449046       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.460544       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.461862       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.497397       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.547525       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.578522       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.652025       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.655737       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.686100       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556] <==
	I0815 01:19:05.258423       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 01:19:05.258516       1 policy_source.go:224] refreshing policies
	I0815 01:19:05.290793       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 01:19:05.303385       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 01:19:05.306723       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 01:19:05.306756       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 01:19:05.317492       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:19:05.318863       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 01:19:05.324966       1 aggregator.go:171] initial CRD sync complete...
	I0815 01:19:05.324990       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 01:19:05.324997       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 01:19:05.325003       1 cache.go:39] Caches are synced for autoregister controller
	I0815 01:19:05.329754       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 01:19:05.382928       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 01:19:05.383501       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 01:19:05.387247       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 01:19:05.397276       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 01:19:06.191785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 01:19:06.840743       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 01:19:06.857035       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 01:19:06.901848       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 01:19:06.927609       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 01:19:06.933628       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 01:19:08.840880       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 01:19:08.893833       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7] <==
	I0815 01:19:08.582573       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 01:19:08.585905       1 shared_informer.go:320] Caches are synced for TTL
	I0815 01:19:08.587203       1 shared_informer.go:320] Caches are synced for service account
	I0815 01:19:08.588379       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0815 01:19:08.588579       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0815 01:19:08.588773       1 shared_informer.go:320] Caches are synced for PVC protection
	I0815 01:19:08.588800       1 shared_informer.go:320] Caches are synced for PV protection
	I0815 01:19:08.588842       1 shared_informer.go:320] Caches are synced for stateful set
	I0815 01:19:08.594509       1 shared_informer.go:320] Caches are synced for endpoint
	I0815 01:19:08.597424       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0815 01:19:08.601554       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0815 01:19:08.608644       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="86.60911ms"
	I0815 01:19:08.609472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.663µs"
	I0815 01:19:08.616196       1 shared_informer.go:320] Caches are synced for daemon sets
	I0815 01:19:08.642931       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0815 01:19:08.748204       1 shared_informer.go:320] Caches are synced for deployment
	I0815 01:19:08.759796       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:19:08.763200       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 01:19:08.794803       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:19:08.838208       1 shared_informer.go:320] Caches are synced for disruption
	I0815 01:19:09.237989       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:19:09.238029       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 01:19:09.239808       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:19:11.046087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="22.832616ms"
	I0815 01:19:11.046585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="109.946µs"
	
	
	==> kube-controller-manager [e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9] <==
	I0815 01:18:48.137410       1 serving.go:386] Generated self-signed cert in-memory
	I0815 01:18:48.556066       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 01:18:48.556101       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:18:48.559864       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 01:18:48.560235       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 01:18:48.560263       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 01:18:48.560291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:17:54.301643       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:17:54.312152       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0815 01:17:54.312241       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:17:54.349845       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:17:54.349885       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:17:54.349910       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:17:54.352153       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:17:54.352484       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:17:54.352568       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:17:54.353826       1 config.go:197] "Starting service config controller"
	I0815 01:17:54.353877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:17:54.353909       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:17:54.353924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:17:54.354486       1 config.go:326] "Starting node config controller"
	I0815 01:17:54.354526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:17:54.454734       1 shared_informer.go:320] Caches are synced for node config
	I0815 01:17:54.454828       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:17:54.454863       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de] <==
	 >
	E0815 01:18:48.569207       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:18:49.945563       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-064537\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	E0815 01:19:01.764758       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-064537\": dial tcp 192.168.61.243:8443: connect: connection refused - error from a previous attempt: unexpected EOF"
	I0815 01:19:05.343138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0815 01:19:05.343300       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:19:05.410938       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:19:05.411020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:19:05.411062       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:19:05.413480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:19:05.413783       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:19:05.413950       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:19:05.415183       1 config.go:197] "Starting service config controller"
	I0815 01:19:05.415274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:19:05.415322       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:19:05.415340       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:19:05.415814       1 config.go:326] "Starting node config controller"
	I0815 01:19:05.415892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:19:05.515349       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:19:05.515469       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:19:05.516216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d] <==
	I0815 01:18:48.532316       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:18:49.879413       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:18:49.879514       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:18:49.879577       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:18:49.879604       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:18:49.984895       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:18:49.984986       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0815 01:18:49.985065       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0815 01:18:49.987575       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0815 01:18:49.991958       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 01:18:49.991323       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:18:49.991307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:18:49.993287       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 01:18:49.994499       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056] <==
	I0815 01:19:03.603500       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:19:05.232947       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:19:05.233056       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:19:05.233087       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:19:05.233151       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:19:05.316720       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:19:05.316756       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:19:05.328823       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 01:19:05.332267       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:19:05.332312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:19:05.332339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:19:05.433024       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.335046    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb3ed9bf63f0c0aa95b78896f2b0f6a3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-064537\" (UID: \"eb3ed9bf63f0c0aa95b78896f2b0f6a3\") " pod="kube-system/kube-controller-manager-pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.335091    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/03e7be76a9c4e873c0614c1101592575-etcd-certs\") pod \"etcd-pause-064537\" (UID: \"03e7be76a9c4e873c0614c1101592575\") " pod="kube-system/etcd-pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.335078    3250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-064537?timeout=10s\": dial tcp 192.168.61.243:8443: connect: connection refused" interval="400ms"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.493261    3250 kubelet_node_status.go:72] "Attempting to register node" node="pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.494223    3250 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.243:8443: connect: connection refused" node="pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.610715    3250 scope.go:117] "RemoveContainer" containerID="2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.613470    3250 scope.go:117] "RemoveContainer" containerID="e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.615445    3250 scope.go:117] "RemoveContainer" containerID="65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.616301    3250 scope.go:117] "RemoveContainer" containerID="5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.737976    3250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-064537?timeout=10s\": dial tcp 192.168.61.243:8443: connect: connection refused" interval="800ms"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.896270    3250 kubelet_node_status.go:72] "Attempting to register node" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341264    3250 kubelet_node_status.go:111] "Node was previously registered" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341448    3250 kubelet_node_status.go:75] "Successfully registered node" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341480    3250 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.342602    3250 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.117964    3250 apiserver.go:52] "Watching apiserver"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.124246    3250 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.127882    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e749136f-57bd-41a0-aa1c-1d12c05445a4-lib-modules\") pod \"kube-proxy-jkgw5\" (UID: \"e749136f-57bd-41a0-aa1c-1d12c05445a4\") " pod="kube-system/kube-proxy-jkgw5"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.128039    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e749136f-57bd-41a0-aa1c-1d12c05445a4-xtables-lock\") pod \"kube-proxy-jkgw5\" (UID: \"e749136f-57bd-41a0-aa1c-1d12c05445a4\") " pod="kube-system/kube-proxy-jkgw5"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.423208    3250 scope.go:117] "RemoveContainer" containerID="b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2"
	Aug 15 01:19:10 pause-064537 kubelet[3250]: I0815 01:19:10.999629    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 15 01:19:12 pause-064537 kubelet[3250]: E0815 01:19:12.192709    3250 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684752192055489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:12 pause-064537 kubelet[3250]: E0815 01:19:12.192731    3250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684752192055489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:22 pause-064537 kubelet[3250]: E0815 01:19:22.194193    3250 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684762193862619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:22 pause-064537 kubelet[3250]: E0815 01:19:22.194251    3250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684762193862619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-064537 -n pause-064537
helpers_test.go:261: (dbg) Run:  kubectl --context pause-064537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-064537 -n pause-064537
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-064537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-064537 logs -n 25: (1.268350068s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-339919             | running-upgrade-339919    | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:14 UTC |
	| start   | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:14 UTC | 15 Aug 24 01:15 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-312183 sudo           | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:15 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-284326 stop           | minikube                  | jenkins | v1.26.0 | 15 Aug 24 01:15 UTC | 15 Aug 24 01:15 UTC |
	| start   | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:15 UTC | 15 Aug 24 01:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-339919             | running-upgrade-339919    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p cert-expiration-131152             | cert-expiration-131152    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-284326             | stopped-upgrade-284326    | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:17 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-312183 sudo           | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-312183                | NoKubernetes-312183       | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:16 UTC |
	| start   | -p pause-064537 --memory=2048         | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:16 UTC | 15 Aug 24 01:18 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-221548 ssh cat     | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-221548          | force-systemd-flag-221548 | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:17 UTC |
	| start   | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:17 UTC | 15 Aug 24 01:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-411164 ssh               | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-411164 -- sudo        | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-411164                | cert-options-411164       | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p old-k8s-version-390782             | old-k8s-version-390782    | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p pause-064537                       | pause-064537              | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC | 15 Aug 24 01:18 UTC |
	| start   | -p kubernetes-upgrade-146394          | kubernetes-upgrade-146394 | jenkins | v1.33.1 | 15 Aug 24 01:18 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:18:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:18:47.833617   63299 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:18:47.833743   63299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:47.833754   63299 out.go:304] Setting ErrFile to fd 2...
	I0815 01:18:47.833767   63299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:18:47.833930   63299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:18:47.834422   63299 out.go:298] Setting JSON to false
	I0815 01:18:47.835362   63299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7273,"bootTime":1723677455,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:18:47.835425   63299 start.go:139] virtualization: kvm guest
	I0815 01:18:47.837722   63299 out.go:177] * [kubernetes-upgrade-146394] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:18:47.839188   63299 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:18:47.839180   63299 notify.go:220] Checking for updates...
	I0815 01:18:47.840644   63299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:18:47.842289   63299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:18:47.843671   63299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:18:47.844792   63299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:18:47.845922   63299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:18:47.847179   63299 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:18:47.847687   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.847751   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.864406   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0815 01:18:47.864885   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.865445   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.865473   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.865851   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.866089   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.866362   63299 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:18:47.866799   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.866848   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.883108   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I0815 01:18:47.883539   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.884058   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.884092   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.884420   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.884607   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.921712   63299 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:18:47.922869   63299 start.go:297] selected driver: kvm2
	I0815 01:18:47.922888   63299 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:47.923010   63299 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:18:47.923782   63299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:47.923842   63299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:18:47.938927   63299 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:18:47.939433   63299 cni.go:84] Creating CNI manager for ""
	I0815 01:18:47.939453   63299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:18:47.939513   63299 start.go:340] cluster config:
	{Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:18:47.939652   63299 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:18:47.941277   63299 out.go:177] * Starting "kubernetes-upgrade-146394" primary control-plane node in "kubernetes-upgrade-146394" cluster
	I0815 01:18:47.942256   63299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:18:47.942294   63299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:18:47.942304   63299 cache.go:56] Caching tarball of preloaded images
	I0815 01:18:47.942396   63299 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:18:47.942414   63299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:18:47.942523   63299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:18:47.942768   63299 start.go:360] acquireMachinesLock for kubernetes-upgrade-146394: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:18:47.942848   63299 start.go:364] duration metric: took 43.534µs to acquireMachinesLock for "kubernetes-upgrade-146394"
	I0815 01:18:47.942870   63299 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:18:47.942885   63299 fix.go:54] fixHost starting: 
	I0815 01:18:47.943275   63299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:18:47.943314   63299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:18:47.957726   63299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I0815 01:18:47.958157   63299 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:18:47.958721   63299 main.go:141] libmachine: Using API Version  1
	I0815 01:18:47.958749   63299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:18:47.959086   63299 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:18:47.959309   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:18:47.959464   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetState
	I0815 01:18:47.961146   63299 fix.go:112] recreateIfNeeded on kubernetes-upgrade-146394: state=Stopped err=<nil>
	I0815 01:18:47.961176   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	W0815 01:18:47.961322   63299 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:18:47.962996   63299 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-146394" ...
	I0815 01:18:47.963965   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .Start
	I0815 01:18:47.964134   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring networks are active...
	I0815 01:18:47.964862   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network default is active
	I0815 01:18:47.965271   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Ensuring network mk-kubernetes-upgrade-146394 is active
	I0815 01:18:47.965715   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Getting domain xml...
	I0815 01:18:47.966554   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Creating domain...
	I0815 01:18:49.216233   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting to get IP...
	I0815 01:18:49.217172   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.217678   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.217750   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.217644   63332 retry.go:31] will retry after 190.327372ms: waiting for machine to come up
	I0815 01:18:49.410295   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.410884   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.410904   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.410846   63332 retry.go:31] will retry after 290.652704ms: waiting for machine to come up
	I0815 01:18:49.703506   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.704057   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:49.704083   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:49.703988   63332 retry.go:31] will retry after 374.905949ms: waiting for machine to come up
	I0815 01:18:50.080861   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.081454   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.081518   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:50.081413   63332 retry.go:31] will retry after 380.337818ms: waiting for machine to come up
	I0815 01:18:50.462794   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.463420   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:50.463444   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:50.463364   63332 retry.go:31] will retry after 697.728389ms: waiting for machine to come up
	I0815 01:18:51.162604   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:51.163137   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:51.163162   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:51.163080   63332 retry.go:31] will retry after 949.275888ms: waiting for machine to come up
	I0815 01:18:52.113648   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:52.114051   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:52.114072   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:52.114011   63332 retry.go:31] will retry after 1.172343668s: waiting for machine to come up
	I0815 01:18:53.287530   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:53.288034   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:53.288059   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:53.287992   63332 retry.go:31] will retry after 1.308726981s: waiting for machine to come up
	I0815 01:18:54.598276   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:54.598775   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:54.598802   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:54.598735   63332 retry.go:31] will retry after 1.20091007s: waiting for machine to come up
	I0815 01:18:55.800847   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:55.801341   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:55.801369   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:55.801280   63332 retry.go:31] will retry after 2.080792306s: waiting for machine to come up
	I0815 01:19:00.938095   63084 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2 65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d 5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da 2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657 e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a 75c4acd722339192c314d4a56d694984a3727fcea92ca1a0453ca7fae22aa897 5dd791247d2d708d075f10a649268a8316ff678fb575ad7bd25e9bbed88908ed 3e21f84a1ba01a70ec79c115be5f44eea08fb52aaa05a1594812609ebeae4f27 39100551d498721e0372891bb0b5176720c0315c8f21db32ad70ce2cf9fdf53f: (13.291096705s)
	W0815 01:19:00.938173   63084 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2 65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d 5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da 2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657 e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a 75c4acd722339192c314d4a56d694984a3727fcea92ca1a0453ca7fae22aa897 5dd791247d2d708d075f10a649268a8316ff678fb575ad7bd25e9bbed88908ed 3e21f84a1ba01a70ec79c115be5f44eea08fb52aaa05a1594812609ebeae4f27 39100551d498721e0372891bb0b5176720c0315c8f21db32ad70ce2cf9fdf53f: Process exited with status 1
	stdout:
	b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2
	65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d
	5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da
	2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657
	e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9
	
	stderr:
	E0815 01:19:00.924186    3031 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": container with ID starting with 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 not found: ID does not exist" containerID="7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2"
	time="2024-08-15T01:19:00Z" level=fatal msg="stopping the container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": rpc error: code = NotFound desc = could not find container \"7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2\": container with ID starting with 7328d521e23f6a8ab02fd2c584d5bade4a58db9569075e12e890730f95355aa2 not found: ID does not exist"
	I0815 01:19:00.938239   63084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:19:00.974389   63084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:19:00.984264   63084 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 15 01:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 15 01:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug 15 01:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 15 01:17 /etc/kubernetes/scheduler.conf
	
	I0815 01:19:00.984323   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:19:00.993015   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:19:01.001572   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:19:01.010604   63084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:01.010654   63084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:19:01.019386   63084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:19:01.027809   63084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:01.027874   63084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:19:01.036642   63084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:19:01.046393   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:01.099935   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:01.807209   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:02.019534   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:02.087772   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:02.178235   63084 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:02.178327   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:02.679045   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:18:57.884306   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:18:57.884785   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:18:57.884813   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:18:57.884736   63332 retry.go:31] will retry after 2.214242479s: waiting for machine to come up
	I0815 01:19:00.101595   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:00.102182   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:19:00.102211   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:19:00.102130   63332 retry.go:31] will retry after 2.956379186s: waiting for machine to come up
	I0815 01:19:03.178516   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:03.192493   63084 api_server.go:72] duration metric: took 1.014279562s to wait for apiserver process to appear ...
	I0815 01:19:03.192523   63084 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:19:03.192540   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.229477   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:05.229503   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:05.229516   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.246854   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:05.246876   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:05.693117   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:05.697653   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:19:05.697689   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:19:06.193048   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:06.197520   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:19:06.197551   63084 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:19:06.692751   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:06.696674   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0815 01:19:06.702744   63084 api_server.go:141] control plane version: v1.31.0
	I0815 01:19:06.702765   63084 api_server.go:131] duration metric: took 3.510235751s to wait for apiserver health ...
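
	The 403 responses earlier in this wait are typical for this stage: until the rbac/bootstrap-roles post-start hook completes, the unauthenticated probe (system:anonymous) is not yet allowed to GET /healthz; the later 500s list the still-failing hooks, and the endpoint finally returns 200. The same endpoint can be probed by hand; the command below is only an illustrative sketch (the address is taken from the log, -k skips TLS verification, and ?verbose asks the apiserver to list the individual checks):

	    curl -k "https://192.168.61.243:8443/healthz?verbose"
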
	I0815 01:19:06.702774   63084 cni.go:84] Creating CNI manager for ""
	I0815 01:19:06.702780   63084 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:19:06.704841   63084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:19:06.705929   63084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:19:06.715945   63084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
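
	The 496-byte file copied above is the bridge CNI configuration minikube generates; its exact contents are not shown in this log. The snippet below is only an illustrative conflist of the general shape the CNI bridge plugin expects (plugin names and the 10.244.0.0/16 subnet are assumptions, not a dump of the real file):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
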
	I0815 01:19:06.732452   63084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:19:06.732520   63084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 01:19:06.732544   63084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 01:19:06.740922   63084 system_pods.go:59] 6 kube-system pods found
	I0815 01:19:06.740962   63084 system_pods.go:61] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:19:06.740976   63084 system_pods.go:61] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:19:06.740987   63084 system_pods.go:61] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:19:06.741001   63084 system_pods.go:61] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:19:06.741008   63084 system_pods.go:61] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:06.741018   63084 system_pods.go:61] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:19:06.741029   63084 system_pods.go:74] duration metric: took 8.558245ms to wait for pod list to return data ...
	I0815 01:19:06.741039   63084 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:19:06.745233   63084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:19:06.745262   63084 node_conditions.go:123] node cpu capacity is 2
	I0815 01:19:06.745272   63084 node_conditions.go:105] duration metric: took 4.226926ms to run NodePressure ...
	I0815 01:19:06.745288   63084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:07.002576   63084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:19:07.007579   63084 kubeadm.go:739] kubelet initialised
	I0815 01:19:07.007602   63084 kubeadm.go:740] duration metric: took 5.003617ms waiting for restarted kubelet to initialise ...
	I0815 01:19:07.007609   63084 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:07.011873   63084 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
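
	This pod_ready poll is minikube's internal readiness gate. An equivalent check from outside the test, assuming the kubeconfig context carries the profile name pause-064537 (an assumption; the context name is not shown in the log), would be:

	    kubectl --context pause-064537 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
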
	I0815 01:19:03.059540   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:03.059927   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | unable to find current IP address of domain kubernetes-upgrade-146394 in network mk-kubernetes-upgrade-146394
	I0815 01:19:03.059955   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | I0815 01:19:03.059877   63332 retry.go:31] will retry after 4.353508843s: waiting for machine to come up
	I0815 01:19:07.418293   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.418802   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has current primary IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.418824   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Found IP for machine: 192.168.72.130
	I0815 01:19:07.418839   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserving static IP address...
	I0815 01:19:07.419270   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-146394", mac: "52:54:00:c0:3a:c8", ip: "192.168.72.130"} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.419315   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | skip adding static IP to network mk-kubernetes-upgrade-146394 - found existing host DHCP lease matching {name: "kubernetes-upgrade-146394", mac: "52:54:00:c0:3a:c8", ip: "192.168.72.130"}
	I0815 01:19:07.419335   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Reserved static IP address: 192.168.72.130
	I0815 01:19:07.419349   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Waiting for SSH to be available...
	I0815 01:19:07.419356   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Getting to WaitForSSH function...
	I0815 01:19:07.421472   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.421946   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.421973   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.422085   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH client type: external
	I0815 01:19:07.422105   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa (-rw-------)
	I0815 01:19:07.422130   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:19:07.422139   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | About to run SSH command:
	I0815 01:19:07.422148   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | exit 0
	I0815 01:19:07.548671   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | SSH cmd err, output: <nil>: 
	I0815 01:19:07.549030   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetConfigRaw
	I0815 01:19:07.549687   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:07.552155   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.552437   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.552464   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.552748   63299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/config.json ...
	I0815 01:19:07.552966   63299 machine.go:94] provisionDockerMachine start ...
	I0815 01:19:07.552985   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:07.553213   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.556009   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.556439   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.556469   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.556615   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.556826   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.557012   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.557197   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.557450   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.557714   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.557728   63299 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:19:07.664550   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:19:07.664580   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.664959   63299 buildroot.go:166] provisioning hostname "kubernetes-upgrade-146394"
	I0815 01:19:07.664984   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.665170   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.667696   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.668060   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.668093   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.668196   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.668377   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.668584   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.668761   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.668914   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.669080   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.669097   63299 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-146394 && echo "kubernetes-upgrade-146394" | sudo tee /etc/hostname
	I0815 01:19:07.786879   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-146394
	
	I0815 01:19:07.786907   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.789615   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.790001   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.790040   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.790246   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:07.790477   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.790637   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:07.790830   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:07.791002   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:07.791193   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:07.791211   63299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-146394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-146394/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-146394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:19:07.904846   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:19:07.904872   63299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:19:07.904896   63299 buildroot.go:174] setting up certificates
	I0815 01:19:07.904909   63299 provision.go:84] configureAuth start
	I0815 01:19:07.904921   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetMachineName
	I0815 01:19:07.905202   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:07.908466   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.908919   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.908961   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.909047   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:07.911366   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.911662   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:07.911690   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:07.911809   63299 provision.go:143] copyHostCerts
	I0815 01:19:07.911863   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:19:07.911884   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:19:07.911955   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:19:07.912098   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:19:07.912111   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:19:07.912141   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:19:07.912224   63299 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:19:07.912235   63299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:19:07.912264   63299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:19:07.912343   63299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-146394 san=[127.0.0.1 192.168.72.130 kubernetes-upgrade-146394 localhost minikube]
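
	The server certificate generated above should carry the SANs listed at the end of that line (127.0.0.1, 192.168.72.130, kubernetes-upgrade-146394, localhost, minikube). A quick way to verify them, sketched here with the key path taken from the log, is:

	    openssl x509 -in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
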
	I0815 01:19:08.089615   63299 provision.go:177] copyRemoteCerts
	I0815 01:19:08.089694   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:19:08.089731   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.092416   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.092805   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.092840   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.093010   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.093204   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.093366   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.093546   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.178776   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:19:08.201276   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0815 01:19:08.223204   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:19:08.244961   63299 provision.go:87] duration metric: took 340.040199ms to configureAuth
	I0815 01:19:08.244989   63299 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:19:08.245211   63299 config.go:182] Loaded profile config "kubernetes-upgrade-146394": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:19:08.245293   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.247759   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.248141   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.248183   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.248383   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.248539   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.248701   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.248818   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.249010   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:08.249254   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:08.249283   63299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:19:08.510629   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:19:08.510659   63299 machine.go:97] duration metric: took 957.678971ms to provisionDockerMachine
	I0815 01:19:08.510675   63299 start.go:293] postStartSetup for "kubernetes-upgrade-146394" (driver="kvm2")
	I0815 01:19:08.510705   63299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:19:08.510739   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.511061   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:19:08.511088   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.514111   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.514688   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.514721   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.514879   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.515138   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.515338   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.515502   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.603211   63299 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:19:08.608152   63299 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:19:08.608178   63299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:19:08.608244   63299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:19:08.608354   63299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:19:08.608482   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:19:08.620189   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:19:08.643934   63299 start.go:296] duration metric: took 133.24574ms for postStartSetup
	I0815 01:19:08.643971   63299 fix.go:56] duration metric: took 20.701095201s for fixHost
	I0815 01:19:08.643989   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.647018   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.647369   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.647411   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.647529   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.647730   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.647904   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.648090   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.648298   63299 main.go:141] libmachine: Using SSH client type: native
	I0815 01:19:08.648524   63299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0815 01:19:08.648541   63299 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:19:08.753219   63299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723684748.714418371
	
	I0815 01:19:08.753250   63299 fix.go:216] guest clock: 1723684748.714418371
	I0815 01:19:08.753260   63299 fix.go:229] Guest: 2024-08-15 01:19:08.714418371 +0000 UTC Remote: 2024-08-15 01:19:08.643974847 +0000 UTC m=+20.852833463 (delta=70.443524ms)
	I0815 01:19:08.753291   63299 fix.go:200] guest clock delta is within tolerance: 70.443524ms
	I0815 01:19:08.753297   63299 start.go:83] releasing machines lock for "kubernetes-upgrade-146394", held for 20.810435446s
	I0815 01:19:08.753317   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.753575   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:08.756453   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.756792   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.756821   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.757006   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757522   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757681   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .DriverName
	I0815 01:19:08.757800   63299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:19:08.757841   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.757870   63299 ssh_runner.go:195] Run: cat /version.json
	I0815 01:19:08.757891   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHHostname
	I0815 01:19:08.760766   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.760789   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761131   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.761173   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761200   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:08.761213   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:08.761314   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.761511   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHPort
	I0815 01:19:08.761676   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.761677   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHKeyPath
	I0815 01:19:08.761837   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.761844   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetSSHUsername
	I0815 01:19:08.761983   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.761983   63299 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kubernetes-upgrade-146394/id_rsa Username:docker}
	I0815 01:19:08.841502   63299 ssh_runner.go:195] Run: systemctl --version
	I0815 01:19:08.874089   63299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:19:09.020579   63299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:19:09.026562   63299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:19:09.026633   63299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:19:09.044643   63299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:19:09.044686   63299 start.go:495] detecting cgroup driver to use...
	I0815 01:19:09.044761   63299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:19:09.061861   63299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:19:09.076296   63299 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:19:09.076378   63299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:19:09.090842   63299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:19:09.103631   63299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:19:09.216218   63299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:19:09.390461   63299 docker.go:233] disabling docker service ...
	I0815 01:19:09.390530   63299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:19:09.404627   63299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:19:09.417547   63299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:19:09.546132   63299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:19:09.661977   63299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:19:09.675076   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:19:09.692400   63299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:19:09.692474   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.702269   63299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:19:09.702333   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.712748   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.722789   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.732589   63299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:19:09.742711   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.752460   63299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.768474   63299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:19:09.778549   63299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:19:09.787207   63299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:19:09.787262   63299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:19:09.798967   63299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:19:09.808021   63299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:09.930178   63299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:19:10.065578   63299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:19:10.065664   63299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:19:10.070333   63299 start.go:563] Will wait 60s for crictl version
	I0815 01:19:10.070388   63299 ssh_runner.go:195] Run: which crictl
	I0815 01:19:10.074236   63299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:19:10.121834   63299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:19:10.121957   63299 ssh_runner.go:195] Run: crio --version
	I0815 01:19:10.150305   63299 ssh_runner.go:195] Run: crio --version
	I0815 01:19:10.180341   63299 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
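
	Taken together, the tee and sed edits above leave CRI-O configured roughly as follows; this is a reconstruction from the commands shown in the log, not a dump of the files on the guest:

	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
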
	I0815 01:19:09.019262   63084 pod_ready.go:102] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:11.023157   63084 pod_ready.go:102] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:11.517731   63084 pod_ready.go:92] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:11.517751   63084 pod_ready.go:81] duration metric: took 4.505856705s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:11.517761   63084 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:12.525658   63084 pod_ready.go:92] pod "etcd-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:12.525687   63084 pod_ready.go:81] duration metric: took 1.007917532s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:12.525700   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:10.181492   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) Calling .GetIP
	I0815 01:19:10.184071   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:10.184494   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:3a:c8", ip: ""} in network mk-kubernetes-upgrade-146394: {Iface:virbr4 ExpiryTime:2024-08-15 02:18:58 +0000 UTC Type:0 Mac:52:54:00:c0:3a:c8 Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:kubernetes-upgrade-146394 Clientid:01:52:54:00:c0:3a:c8}
	I0815 01:19:10.184524   63299 main.go:141] libmachine: (kubernetes-upgrade-146394) DBG | domain kubernetes-upgrade-146394 has defined IP address 192.168.72.130 and MAC address 52:54:00:c0:3a:c8 in network mk-kubernetes-upgrade-146394
	I0815 01:19:10.184752   63299 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:19:10.188580   63299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:19:10.200739   63299 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:19:10.200863   63299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:19:10.200925   63299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:19:10.246754   63299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:19:10.246824   63299 ssh_runner.go:195] Run: which lz4
	I0815 01:19:10.250585   63299 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:19:10.254389   63299 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:19:10.254419   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:19:11.482693   63299 crio.go:462] duration metric: took 1.232142964s to copy over tarball
	I0815 01:19:11.482763   63299 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:19:14.033094   63084 pod_ready.go:92] pod "kube-apiserver-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:14.033116   63084 pod_ready.go:81] duration metric: took 1.507407378s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:14.033126   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:16.095360   63084 pod_ready.go:102] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"False"
	I0815 01:19:13.477196   63299 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.994397565s)
	I0815 01:19:13.477240   63299 crio.go:469] duration metric: took 1.994514s to extract the tarball
	I0815 01:19:13.477251   63299 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:19:13.514404   63299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:19:13.559065   63299 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:19:13.559088   63299 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:19:13.559095   63299 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0815 01:19:13.559189   63299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-146394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:19:13.559246   63299 ssh_runner.go:195] Run: crio config
	I0815 01:19:13.601202   63299 cni.go:84] Creating CNI manager for ""
	I0815 01:19:13.601221   63299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:19:13.601230   63299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:19:13.601252   63299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-146394 NodeName:kubernetes-upgrade-146394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:19:13.601431   63299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-146394"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:19:13.601505   63299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:19:13.610710   63299 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:19:13.610778   63299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:19:13.619166   63299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0815 01:19:13.634115   63299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:19:13.649083   63299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0815 01:19:13.664194   63299 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0815 01:19:13.667528   63299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:19:13.678568   63299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:13.810749   63299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:19:13.827580   63299 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394 for IP: 192.168.72.130
	I0815 01:19:13.827605   63299 certs.go:194] generating shared ca certs ...
	I0815 01:19:13.827632   63299 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:13.827813   63299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:19:13.827870   63299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:19:13.827883   63299 certs.go:256] generating profile certs ...
	I0815 01:19:13.828000   63299 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key
	I0815 01:19:13.828070   63299 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key.6a0a8e0c
	I0815 01:19:13.828120   63299 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key
	I0815 01:19:13.828250   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:19:13.828284   63299 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:19:13.828298   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:19:13.828330   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:19:13.828359   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:19:13.828388   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:19:13.828443   63299 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:19:13.829289   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:19:13.855426   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:19:13.884301   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:19:13.929511   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:19:13.954894   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 01:19:13.979301   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:19:14.006175   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:19:14.032056   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:19:14.055525   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:19:14.077374   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:19:14.099513   63299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:19:14.121376   63299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:19:14.136538   63299 ssh_runner.go:195] Run: openssl version
	I0815 01:19:14.141756   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:19:14.152216   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.157453   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.157509   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:19:14.163023   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:19:14.172517   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:19:14.182240   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.186143   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.186195   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:19:14.191381   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:19:14.200965   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:19:14.210383   63299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.214168   63299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.214218   63299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:19:14.219301   63299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:19:14.228809   63299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:19:14.232583   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:19:14.237955   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:19:14.243236   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:19:14.248649   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:19:14.254071   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:19:14.259492   63299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:19:14.265009   63299 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-146394 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-146394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:19:14.265121   63299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:19:14.265167   63299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:19:14.304190   63299 cri.go:89] found id: ""
	I0815 01:19:14.304258   63299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:19:14.313686   63299 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:19:14.313702   63299 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:19:14.313743   63299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:19:14.322755   63299 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:19:14.323399   63299 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-146394" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:19:14.323745   63299 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-146394" cluster setting kubeconfig missing "kubernetes-upgrade-146394" context setting]
	I0815 01:19:14.324205   63299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:14.325085   63299 kapi.go:59] client config for kubernetes-upgrade-146394: &rest.Config{Host:"https://192.168.72.130:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.crt", KeyFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kubernetes-upgrade-146394/client.key", CAFile:"/home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 01:19:14.325686   63299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:19:14.335018   63299 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.72.130
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-146394"
	   kubeletExtraArgs:
	     node-ip: 192.168.72.130
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	@@ -33,14 +33,12 @@
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+kubernetesVersion: v1.31.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I0815 01:19:14.335034   63299 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:19:14.335046   63299 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:19:14.335084   63299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:19:14.372825   63299 cri.go:89] found id: ""
	I0815 01:19:14.372885   63299 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:19:14.389545   63299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:19:14.399792   63299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:19:14.399807   63299 kubeadm.go:157] found existing configuration files:
	
	I0815 01:19:14.399869   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:19:14.409132   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:19:14.409186   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:19:14.419123   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:19:14.427390   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:19:14.427441   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:19:14.435793   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:19:14.443587   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:19:14.443645   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:19:14.452206   63299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:19:14.459942   63299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:19:14.459978   63299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:19:14.468240   63299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:19:14.477239   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:14.585380   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:15.971753   63299 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.386339993s)
	I0815 01:19:15.971783   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.194708   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.260257   63299 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:19:16.366713   63299 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:16.366788   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:16.867529   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:17.367658   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:18.039433   63084 pod_ready.go:92] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.039453   63084 pod_ready.go:81] duration metric: took 4.006320243s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.039463   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.043917   63084 pod_ready.go:92] pod "kube-proxy-jkgw5" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.043932   63084 pod_ready.go:81] duration metric: took 4.462973ms for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.043940   63084 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.048312   63084 pod_ready.go:92] pod "kube-scheduler-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.048416   63084 pod_ready.go:81] duration metric: took 4.461567ms for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.048439   63084 pod_ready.go:38] duration metric: took 11.040820763s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:18.048457   63084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:19:18.059929   63084 ops.go:34] apiserver oom_adj: -16
	I0815 01:19:18.059945   63084 kubeadm.go:597] duration metric: took 30.485173576s to restartPrimaryControlPlane
	I0815 01:19:18.059955   63084 kubeadm.go:394] duration metric: took 30.629506931s to StartCluster
	I0815 01:19:18.059972   63084 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:18.060056   63084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:19:18.061228   63084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:19:18.061441   63084 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:19:18.061508   63084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:19:18.061686   63084 config.go:182] Loaded profile config "pause-064537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:19:18.063294   63084 out.go:177] * Enabled addons: 
	I0815 01:19:18.063317   63084 out.go:177] * Verifying Kubernetes components...
	I0815 01:19:18.064508   63084 addons.go:510] duration metric: took 3.000402ms for enable addons: enabled=[]
	I0815 01:19:18.064602   63084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:19:18.223949   63084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:19:18.238421   63084 node_ready.go:35] waiting up to 6m0s for node "pause-064537" to be "Ready" ...
	I0815 01:19:18.241682   63084 node_ready.go:49] node "pause-064537" has status "Ready":"True"
	I0815 01:19:18.241707   63084 node_ready.go:38] duration metric: took 3.251397ms for node "pause-064537" to be "Ready" ...
	I0815 01:19:18.241727   63084 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:18.246810   63084 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.251879   63084 pod_ready.go:92] pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.251903   63084 pod_ready.go:81] duration metric: took 5.061525ms for pod "coredns-6f6b679f8f-gh5hb" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.251914   63084 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.438780   63084 pod_ready.go:92] pod "etcd-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.438808   63084 pod_ready.go:81] duration metric: took 186.883645ms for pod "etcd-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.438820   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.838486   63084 pod_ready.go:92] pod "kube-apiserver-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:18.838515   63084 pod_ready.go:81] duration metric: took 399.686358ms for pod "kube-apiserver-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:18.838529   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.237366   63084 pod_ready.go:92] pod "kube-controller-manager-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:19.237390   63084 pod_ready.go:81] duration metric: took 398.85405ms for pod "kube-controller-manager-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.237400   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.637642   63084 pod_ready.go:92] pod "kube-proxy-jkgw5" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:19.637666   63084 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-proxy-jkgw5" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:19.637675   63084 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:20.037472   63084 pod_ready.go:92] pod "kube-scheduler-pause-064537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:19:20.037505   63084 pod_ready.go:81] duration metric: took 399.822028ms for pod "kube-scheduler-pause-064537" in "kube-system" namespace to be "Ready" ...
	I0815 01:19:20.037515   63084 pod_ready.go:38] duration metric: took 1.79577475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:19:20.037551   63084 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:19:20.037620   63084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:20.055453   63084 api_server.go:72] duration metric: took 1.993983569s to wait for apiserver process to appear ...
	I0815 01:19:20.055478   63084 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:19:20.055501   63084 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0815 01:19:20.062554   63084 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0815 01:19:20.063800   63084 api_server.go:141] control plane version: v1.31.0
	I0815 01:19:20.063820   63084 api_server.go:131] duration metric: took 8.334057ms to wait for apiserver health ...
	I0815 01:19:20.063830   63084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:19:20.239251   63084 system_pods.go:59] 6 kube-system pods found
	I0815 01:19:20.239282   63084 system_pods.go:61] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running
	I0815 01:19:20.239289   63084 system_pods.go:61] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running
	I0815 01:19:20.239294   63084 system_pods.go:61] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running
	I0815 01:19:20.239299   63084 system_pods.go:61] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running
	I0815 01:19:20.239304   63084 system_pods.go:61] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:20.239308   63084 system_pods.go:61] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running
	I0815 01:19:20.239316   63084 system_pods.go:74] duration metric: took 175.478885ms to wait for pod list to return data ...
	I0815 01:19:20.239334   63084 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:19:20.437715   63084 default_sa.go:45] found service account: "default"
	I0815 01:19:20.437744   63084 default_sa.go:55] duration metric: took 198.402501ms for default service account to be created ...
	I0815 01:19:20.437755   63084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:19:20.640145   63084 system_pods.go:86] 6 kube-system pods found
	I0815 01:19:20.640186   63084 system_pods.go:89] "coredns-6f6b679f8f-gh5hb" [c05c76ba-24ca-4a03-8e94-52391b4ab036] Running
	I0815 01:19:20.640194   63084 system_pods.go:89] "etcd-pause-064537" [8c39e488-2339-4b28-bf0f-e01e3fa55fc9] Running
	I0815 01:19:20.640199   63084 system_pods.go:89] "kube-apiserver-pause-064537" [fc53227f-bae3-4591-aa7a-6646f81a49bd] Running
	I0815 01:19:20.640203   63084 system_pods.go:89] "kube-controller-manager-pause-064537" [1758ac28-2b2e-4f76-a3e8-0aa64241c05d] Running
	I0815 01:19:20.640208   63084 system_pods.go:89] "kube-proxy-jkgw5" [e749136f-57bd-41a0-aa1c-1d12c05445a4] Running
	I0815 01:19:20.640212   63084 system_pods.go:89] "kube-scheduler-pause-064537" [0fa69c33-02ff-497d-b53c-80e815733d40] Running
	I0815 01:19:20.640219   63084 system_pods.go:126] duration metric: took 202.458517ms to wait for k8s-apps to be running ...
	I0815 01:19:20.640227   63084 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:19:20.640288   63084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:19:20.655626   63084 system_svc.go:56] duration metric: took 15.388152ms WaitForService to wait for kubelet
	I0815 01:19:20.655659   63084 kubeadm.go:582] duration metric: took 2.594193144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:19:20.655681   63084 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:19:20.837367   63084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:19:20.837399   63084 node_conditions.go:123] node cpu capacity is 2
	I0815 01:19:20.837412   63084 node_conditions.go:105] duration metric: took 181.72528ms to run NodePressure ...
	I0815 01:19:20.837427   63084 start.go:241] waiting for startup goroutines ...
	I0815 01:19:20.837437   63084 start.go:246] waiting for cluster config update ...
	I0815 01:19:20.837445   63084 start.go:255] writing updated cluster config ...
	I0815 01:19:20.837760   63084 ssh_runner.go:195] Run: rm -f paused
	I0815 01:19:20.887164   63084 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:19:20.889269   63084 out.go:177] * Done! kubectl is now configured to use "pause-064537" cluster and "default" namespace by default
	I0815 01:19:17.867155   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:18.366891   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:18.867370   63299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:19:18.882413   63299 api_server.go:72] duration metric: took 2.515707651s to wait for apiserver process to appear ...
	I0815 01:19:18.882435   63299 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:19:18.882456   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:20.940371   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:20.940397   63299 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:20.940407   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:20.970546   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:20.970575   63299 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:21.383121   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:21.387468   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:21.387489   63299 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:21.883122   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:21.890290   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:19:21.890323   63299 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:19:22.383482   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:22.391318   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:19:22.391350   63299 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:19:22.883327   63299 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0815 01:19:22.888752   63299 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0815 01:19:22.895811   63299 api_server.go:141] control plane version: v1.31.0
	I0815 01:19:22.895841   63299 api_server.go:131] duration metric: took 4.013398308s to wait for apiserver health ...
	I0815 01:19:22.895852   63299 cni.go:84] Creating CNI manager for ""
	I0815 01:19:22.895860   63299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:19:22.897398   63299 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.446044576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684763446020206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60d46917-239a-4998-a7e1-382d0491f747 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.446951324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f2c852f-9383-4ef8-ae11-8329b05eb254 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.447052583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f2c852f-9383-4ef8-ae11-8329b05eb254 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.448697522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f2c852f-9383-4ef8-ae11-8329b05eb254 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.503045726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc5360ff-cc0b-4175-b47c-1d15f5c56666 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.503184848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc5360ff-cc0b-4175-b47c-1d15f5c56666 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.504631628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2ad588a-3a3d-4445-b162-150c8c92a76e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.505388194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684763505196648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2ad588a-3a3d-4445-b162-150c8c92a76e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.506239680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ac6461a-9114-4756-a899-e46a0037fd27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.506318500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ac6461a-9114-4756-a899-e46a0037fd27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.506869939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ac6461a-9114-4756-a899-e46a0037fd27 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.552374522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a174a45f-c01f-4a2a-a6f1-daa6d03f3281 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.552468238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a174a45f-c01f-4a2a-a6f1-daa6d03f3281 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.553543630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dbad265-744a-4bd8-baff-30c62872c12d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.553923536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684763553899941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dbad265-744a-4bd8-baff-30c62872c12d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.554417722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e12fdb59-b116-4606-b12d-462a745b320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.554471585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e12fdb59-b116-4606-b12d-462a745b320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.554850222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e12fdb59-b116-4606-b12d-462a745b320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.598910540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb261478-860f-42e0-a263-f573a393931a name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.598991077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb261478-860f-42e0-a263-f573a393931a name=/runtime.v1.RuntimeService/Version
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.600645952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b62d3965-0b86-493b-90a1-20f67571da40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.601035745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684763601011485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b62d3965-0b86-493b-90a1-20f67571da40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.601722049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14810e6e-55d1-473b-a532-c90c07f4131a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.601776612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14810e6e-55d1-473b-a532-c90c07f4131a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:19:23 pause-064537 crio[2356]: time="2024-08-15 01:19:23.602027987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723684746435180864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723684742632809504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723684742686216021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c110
1592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723684742646427651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa
95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723684742624474261,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,}
,Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de,PodSandboxId:9dfbaae81b04735b0bdef9be22ebc6b517e8e3e7cb2722a1c8194a36b53e5084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723684726750383867,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2,PodSandboxId:a4858a59d13892d346ac2868b9f3a4c9b5d55d21817bb5afa38d0fb1302c1d5f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723684727273916867,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gh5hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c05c76ba-24ca-4a03-8e94-52391b4ab036,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da,PodSandboxId:9405237eeeaeaa3516f3085998bf5b755d770a6609754804c1a70a95aee30cf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723684726388612626,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
pause-064537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e7be76a9c4e873c0614c1101592575,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d,PodSandboxId:c58432f91fc8d75dfbb130e6e34cbc478d6000f2113f40142cb6ae0ea787fd02,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723684726393276631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-064537,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847411acd76806da7ec28f8913f4d5fa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657,PodSandboxId:584a51b67fa8c45ba37a74745548ec4cf2d46139e6fa9b8512ef7d0e067b2426,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723684726298729502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-064537,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 0b30e55b5a8f76cb420f732b02ab8fbb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9,PodSandboxId:d7637483c0794a8fbf0a019ac1985597df5ec909830e789fc0e5081cc8ecdf86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723684726294020733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-064537,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: eb3ed9bf63f0c0aa95b78896f2b0f6a3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a,PodSandboxId:9aa15f7f056acb4bc089f05ac8510f19df4e0eafa7612abfdc1169402b013855,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723684673762328170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkgw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e749136f-57bd-41a0-aa1c-1d12c05445a4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14810e6e-55d1-473b-a532-c90c07f4131a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47863b4ea2f8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago       Running             coredns                   2                   a4858a59d1389       coredns-6f6b679f8f-gh5hb
	cb15264d71257       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   20 seconds ago       Running             etcd                      2                   9405237eeeaea       etcd-pause-064537
	2d0e4e057a27a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   21 seconds ago       Running             kube-controller-manager   2                   d7637483c0794       kube-controller-manager-pause-064537
	804f20a0bc1b3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   21 seconds ago       Running             kube-scheduler            2                   c58432f91fc8d       kube-scheduler-pause-064537
	c73e18991895e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 seconds ago       Running             kube-apiserver            2                   584a51b67fa8c       kube-apiserver-pause-064537
	b32a9de8341b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago       Exited              coredns                   1                   a4858a59d1389       coredns-6f6b679f8f-gh5hb
	83469d0f301fa       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   36 seconds ago       Running             kube-proxy                1                   9dfbaae81b047       kube-proxy-jkgw5
	65bdb78cfa5fb       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   37 seconds ago       Exited              kube-scheduler            1                   c58432f91fc8d       kube-scheduler-pause-064537
	5c6b7316f2555       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago       Exited              etcd                      1                   9405237eeeaea       etcd-pause-064537
	2e0a1e80f7a7f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   37 seconds ago       Exited              kube-apiserver            1                   584a51b67fa8c       kube-apiserver-pause-064537
	e3eb9eded28db       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   37 seconds ago       Exited              kube-controller-manager   1                   d7637483c0794       kube-controller-manager-pause-064537
	618741a2dad66       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   9aa15f7f056ac       kube-proxy-jkgw5
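The container-status table above is the node-side CRI view of the same containers that appear in the ListContainers responses earlier in this log. A minimal sketch of how such a listing can be reproduced by hand, assuming crictl is available inside the pause-064537 guest (this command is not part of the captured log):

    out/minikube-linux-amd64 -p pause-064537 ssh "sudo crictl ps -a"    # list all containers, including exited attempts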
	
	
	==> coredns [47863b4ea2f8d913fdf7cbb5f0041cd0df0f641022c6baeb306212e0deaf911b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35450 - 33988 "HINFO IN 7249281135217051061.6073457569363123844. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009831687s
	
	
	==> coredns [b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2] <==
	
	
	==> describe nodes <==
	Name:               pause-064537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-064537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=pause-064537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_17_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:17:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-064537
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:19:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:19:05 +0000   Thu, 15 Aug 2024 01:17:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    pause-064537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7bf3990181f484babb24cff6639b727
	  System UUID:                b7bf3990-181f-484b-abb2-4cff6639b727
	  Boot ID:                    eaa23830-0bee-4122-848c-6beb45e711c3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gh5hb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-pause-064537                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         96s
	  kube-system                 kube-apiserver-pause-064537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-064537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-jkgw5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-064537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                95s                  kubelet          Node pause-064537 status is now: NodeReady
	  Normal  RegisteredNode           91s                  node-controller  Node pause-064537 event: Registered Node pause-064537 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-064537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-064537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-064537 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                  node-controller  Node pause-064537 event: Registered Node pause-064537 in Controller
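A note on the Allocated resources figures in the describe output above: the 750m cpu request is the sum of the per-pod requests under Non-terminated Pods (100m + 100m + 250m + 200m + 0 + 100m), and 750m out of the node's 2000m allocatable cpu is 37.5%, which kubectl truncates to the 37% shown; likewise the 170Mi memory request is 70Mi + 100Mi, roughly 8% of the ~1968Mi allocatable memory.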
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.675154] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.063212] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060959] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.192833] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.111668] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.256029] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.044902] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.905562] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.065128] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.978760] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.080584] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.807129] systemd-fstab-generator[1372]: Ignoring "noauto" option for root device
	[  +0.117441] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 01:18] kauditd_printk_skb: 89 callbacks suppressed
	[ +38.665523] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.139479] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.156231] systemd-fstab-generator[2300]: Ignoring "noauto" option for root device
	[  +0.120252] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
	[  +0.267880] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +1.740552] systemd-fstab-generator[2463]: Ignoring "noauto" option for root device
	[  +4.293576] kauditd_printk_skb: 196 callbacks suppressed
	[Aug15 01:19] systemd-fstab-generator[3243]: Ignoring "noauto" option for root device
	[  +6.995050] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.194430] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	
	
	==> etcd [5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da] <==
	{"level":"info","ts":"2024-08-15T01:18:48.146344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:18:48.146407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-15T01:18:48.146462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.146561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:18:48.150391Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:pause-064537 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:18:48.151517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:18:48.159511Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:18:48.160334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	{"level":"info","ts":"2024-08-15T01:18:48.160723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:18:48.171586Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:18:48.172444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:18:48.206266Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:18:48.206306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:18:50.600600Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T01:18:50.600717Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-064537","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"]}
	{"level":"warn","ts":"2024-08-15T01:18:50.600835Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.600953Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.622647Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.243:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T01:18:50.622862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.243:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T01:18:50.622955Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"704fd09e1c9dce1f","current-leader-member-id":"704fd09e1c9dce1f"}
	{"level":"info","ts":"2024-08-15T01:18:50.627777Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:18:50.627999Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:18:50.628051Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-064537","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"]}
	
	
	==> etcd [cb15264d7125766e9ca5fae54c2d596f8f938c054944ab242a1c7d18381cba44] <==
	{"level":"info","ts":"2024-08-15T01:19:03.068344Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"704fd09e1c9dce1f","initial-advertise-peer-urls":["https://192.168.61.243:2380"],"listen-peer-urls":["https://192.168.61.243:2380"],"advertise-client-urls":["https://192.168.61.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T01:19:03.068380Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:19:03.068467Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:19:03.068486Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-15T01:19:04.088885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 3"}
	{"level":"info","ts":"2024-08-15T01:19:04.088987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.088993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.089001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.089008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 4"}
	{"level":"info","ts":"2024-08-15T01:19:04.094222Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:pause-064537 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:19:04.094231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:04.094430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:19:04.094627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:04.094640Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:19:04.095227Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:04.095423Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:19:04.096046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:19:04.096398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	{"level":"warn","ts":"2024-08-15T01:19:15.744674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.557039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-064537\" ","response":"range_response_count:1 size:6601"}
	{"level":"info","ts":"2024-08-15T01:19:15.744757Z","caller":"traceutil/trace.go:171","msg":"trace[1999437051] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-064537; range_end:; response_count:1; response_revision:518; }","duration":"222.688922ms","start":"2024-08-15T01:19:15.522053Z","end":"2024-08-15T01:19:15.744742Z","steps":["trace[1999437051] 'range keys from in-memory index tree'  (duration: 222.260232ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:19:15.813803Z","caller":"traceutil/trace.go:171","msg":"trace[247297676] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"182.061253ms","start":"2024-08-15T01:19:15.631728Z","end":"2024-08-15T01:19:15.813789Z","steps":["trace[247297676] 'process raft request'  (duration: 181.727886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:19:16.075889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.565243ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:19:16.076047Z","caller":"traceutil/trace.go:171","msg":"trace[1950054731] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:519; }","duration":"221.745516ms","start":"2024-08-15T01:19:15.854286Z","end":"2024-08-15T01:19:16.076032Z","steps":["trace[1950054731] 'range keys from in-memory index tree'  (duration: 221.543732ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:19:23 up 2 min,  0 users,  load average: 1.45, 0.49, 0.18
	Linux pause-064537 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657] <==
	W0815 01:18:59.732907       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.734293       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.822711       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.847017       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.884874       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.887324       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:18:59.959722       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.049606       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.068441       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.084847       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.099415       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.165852       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.178601       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.270029       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.368078       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.408524       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.449046       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.460544       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.461862       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.497397       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.547525       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.578522       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.652025       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.655737       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:19:00.686100       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c73e18991895ee3a304da7d9f717d443cac3579f116a551edd2fdb5490e59556] <==
	I0815 01:19:05.258423       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 01:19:05.258516       1 policy_source.go:224] refreshing policies
	I0815 01:19:05.290793       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 01:19:05.303385       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 01:19:05.306723       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 01:19:05.306756       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 01:19:05.317492       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 01:19:05.318863       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 01:19:05.324966       1 aggregator.go:171] initial CRD sync complete...
	I0815 01:19:05.324990       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 01:19:05.324997       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 01:19:05.325003       1 cache.go:39] Caches are synced for autoregister controller
	I0815 01:19:05.329754       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 01:19:05.382928       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 01:19:05.383501       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 01:19:05.387247       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 01:19:05.397276       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 01:19:06.191785       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 01:19:06.840743       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 01:19:06.857035       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 01:19:06.901848       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 01:19:06.927609       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 01:19:06.933628       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 01:19:08.840880       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 01:19:08.893833       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2d0e4e057a27a303de0ea9b2cc8b1234376aae9d629d3c4d79e228d540d904c7] <==
	I0815 01:19:08.582573       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 01:19:08.585905       1 shared_informer.go:320] Caches are synced for TTL
	I0815 01:19:08.587203       1 shared_informer.go:320] Caches are synced for service account
	I0815 01:19:08.588379       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0815 01:19:08.588579       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0815 01:19:08.588773       1 shared_informer.go:320] Caches are synced for PVC protection
	I0815 01:19:08.588800       1 shared_informer.go:320] Caches are synced for PV protection
	I0815 01:19:08.588842       1 shared_informer.go:320] Caches are synced for stateful set
	I0815 01:19:08.594509       1 shared_informer.go:320] Caches are synced for endpoint
	I0815 01:19:08.597424       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0815 01:19:08.601554       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0815 01:19:08.608644       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="86.60911ms"
	I0815 01:19:08.609472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.663µs"
	I0815 01:19:08.616196       1 shared_informer.go:320] Caches are synced for daemon sets
	I0815 01:19:08.642931       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0815 01:19:08.748204       1 shared_informer.go:320] Caches are synced for deployment
	I0815 01:19:08.759796       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:19:08.763200       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 01:19:08.794803       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 01:19:08.838208       1 shared_informer.go:320] Caches are synced for disruption
	I0815 01:19:09.237989       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:19:09.238029       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 01:19:09.239808       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 01:19:11.046087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="22.832616ms"
	I0815 01:19:11.046585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="109.946µs"
	
	
	==> kube-controller-manager [e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9] <==
	I0815 01:18:48.137410       1 serving.go:386] Generated self-signed cert in-memory
	I0815 01:18:48.556066       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 01:18:48.556101       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:18:48.559864       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 01:18:48.560235       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 01:18:48.560263       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 01:18:48.560291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [618741a2dad66f68e1efc661def71bc71bf65f2f057bc452f83e72169736389a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:17:54.301643       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:17:54.312152       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0815 01:17:54.312241       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:17:54.349845       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:17:54.349885       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:17:54.349910       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:17:54.352153       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:17:54.352484       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:17:54.352568       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:17:54.353826       1 config.go:197] "Starting service config controller"
	I0815 01:17:54.353877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:17:54.353909       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:17:54.353924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:17:54.354486       1 config.go:326] "Starting node config controller"
	I0815 01:17:54.354526       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:17:54.454734       1 shared_informer.go:320] Caches are synced for node config
	I0815 01:17:54.454828       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:17:54.454863       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [83469d0f301fac5dfb4c6cb368c0c3bd49b17dc9accd7de423fbfcd8f20d21de] <==
	 >
	E0815 01:18:48.569207       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:18:49.945563       1 server.go:666] "Failed to retrieve node info" err="nodes \"pause-064537\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]"
	E0815 01:19:01.764758       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-064537\": dial tcp 192.168.61.243:8443: connect: connection refused - error from a previous attempt: unexpected EOF"
	I0815 01:19:05.343138       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0815 01:19:05.343300       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:19:05.410938       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:19:05.411020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:19:05.411062       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:19:05.413480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:19:05.413783       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:19:05.413950       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:19:05.415183       1 config.go:197] "Starting service config controller"
	I0815 01:19:05.415274       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:19:05.415322       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:19:05.415340       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:19:05.415814       1 config.go:326] "Starting node config controller"
	I0815 01:19:05.415892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:19:05.515349       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:19:05.515469       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:19:05.516216       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d] <==
	I0815 01:18:48.532316       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:18:49.879413       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:18:49.879514       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:18:49.879577       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:18:49.879604       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:18:49.984895       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:18:49.984986       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0815 01:18:49.985065       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0815 01:18:49.987575       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0815 01:18:49.991958       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0815 01:18:49.991323       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:18:49.991307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:18:49.993287       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0815 01:18:49.994499       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [804f20a0bc1b3951593018f5b971220316469b8a9b84793426ad9e61a4629056] <==
	I0815 01:19:03.603500       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:19:05.232947       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:19:05.233056       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:19:05.233087       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:19:05.233151       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:19:05.316720       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:19:05.316756       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:19:05.328823       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 01:19:05.332267       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:19:05.332312       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:19:05.332339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:19:05.433024       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.335046    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb3ed9bf63f0c0aa95b78896f2b0f6a3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-064537\" (UID: \"eb3ed9bf63f0c0aa95b78896f2b0f6a3\") " pod="kube-system/kube-controller-manager-pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.335091    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/03e7be76a9c4e873c0614c1101592575-etcd-certs\") pod \"etcd-pause-064537\" (UID: \"03e7be76a9c4e873c0614c1101592575\") " pod="kube-system/etcd-pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.335078    3250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-064537?timeout=10s\": dial tcp 192.168.61.243:8443: connect: connection refused" interval="400ms"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.493261    3250 kubelet_node_status.go:72] "Attempting to register node" node="pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.494223    3250 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.243:8443: connect: connection refused" node="pause-064537"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.610715    3250 scope.go:117] "RemoveContainer" containerID="2e0a1e80f7a7f2b315e38f635507f68b44020ef3aed81b518621cfad5d5cb657"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.613470    3250 scope.go:117] "RemoveContainer" containerID="e3eb9eded28db571fc7de646cad0247bc93b56ed0f165298e69e938e4ee746c9"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.615445    3250 scope.go:117] "RemoveContainer" containerID="65bdb78cfa5fb96fc834e6acf488011d0f59b697d61f74d19fb6559454bc6e5d"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.616301    3250 scope.go:117] "RemoveContainer" containerID="5c6b7316f2555aacb0ae3c52b236590cfca12ac1c2ccf0b1bec8f73a3819d6da"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: E0815 01:19:02.737976    3250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-064537?timeout=10s\": dial tcp 192.168.61.243:8443: connect: connection refused" interval="800ms"
	Aug 15 01:19:02 pause-064537 kubelet[3250]: I0815 01:19:02.896270    3250 kubelet_node_status.go:72] "Attempting to register node" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341264    3250 kubelet_node_status.go:111] "Node was previously registered" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341448    3250 kubelet_node_status.go:75] "Successfully registered node" node="pause-064537"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.341480    3250 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 01:19:05 pause-064537 kubelet[3250]: I0815 01:19:05.342602    3250 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.117964    3250 apiserver.go:52] "Watching apiserver"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.124246    3250 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.127882    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e749136f-57bd-41a0-aa1c-1d12c05445a4-lib-modules\") pod \"kube-proxy-jkgw5\" (UID: \"e749136f-57bd-41a0-aa1c-1d12c05445a4\") " pod="kube-system/kube-proxy-jkgw5"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.128039    3250 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e749136f-57bd-41a0-aa1c-1d12c05445a4-xtables-lock\") pod \"kube-proxy-jkgw5\" (UID: \"e749136f-57bd-41a0-aa1c-1d12c05445a4\") " pod="kube-system/kube-proxy-jkgw5"
	Aug 15 01:19:06 pause-064537 kubelet[3250]: I0815 01:19:06.423208    3250 scope.go:117] "RemoveContainer" containerID="b32a9de8341b7738cf1c40fcaade075cf782ffb20d2cdc83abf2536796d3e8f2"
	Aug 15 01:19:10 pause-064537 kubelet[3250]: I0815 01:19:10.999629    3250 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 15 01:19:12 pause-064537 kubelet[3250]: E0815 01:19:12.192709    3250 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684752192055489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:12 pause-064537 kubelet[3250]: E0815 01:19:12.192731    3250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684752192055489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:22 pause-064537 kubelet[3250]: E0815 01:19:22.194193    3250 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684762193862619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:19:22 pause-064537 kubelet[3250]: E0815 01:19:22.194251    3250 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723684762193862619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-064537 -n pause-064537
helpers_test.go:261: (dbg) Run:  kubectl --context pause-064537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (56.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-884893 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-884893 --alsologtostderr -v=3: exit status 82 (2m0.528783905s)
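The stderr trace below shows libmachine polling the VM state once per second, up to 120 attempts, before `minikube stop` gives up with exit status 82. As a rough illustration of that bounded-retry pattern (a minimal sketch only, not minikube's actual implementation; `waitForStop`, `isStopped`, and `maxAttempts` are hypothetical names introduced here for clarity):

package main

import (
	"fmt"
	"time"
)

// waitForStop polls a state check once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log below.
// isStopped is a hypothetical stand-in for querying the VM driver's state.
func waitForStop(isStopped func() bool, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("machine did not stop within %d attempts", maxAttempts)
}

func main() {
	// A machine that never reports stopped exhausts all 120 attempts,
	// which is roughly the two-minute timeout visible in this test.
	if err := waitForStop(func() bool { return false }, 120); err != nil {
		fmt.Println(err)
	}
}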

                                                
                                                
-- stdout --
	* Stopping node "no-preload-884893"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:21:18.638531   65093 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:21:18.638751   65093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:21:18.638759   65093 out.go:304] Setting ErrFile to fd 2...
	I0815 01:21:18.638763   65093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:21:18.638933   65093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:21:18.639140   65093 out.go:298] Setting JSON to false
	I0815 01:21:18.639213   65093 mustload.go:65] Loading cluster: no-preload-884893
	I0815 01:21:18.639534   65093 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:21:18.639595   65093 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:21:18.639756   65093 mustload.go:65] Loading cluster: no-preload-884893
	I0815 01:21:18.639851   65093 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:21:18.639874   65093 stop.go:39] StopHost: no-preload-884893
	I0815 01:21:18.640248   65093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:21:18.640290   65093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:21:18.658964   65093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0815 01:21:18.659489   65093 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:21:18.660099   65093 main.go:141] libmachine: Using API Version  1
	I0815 01:21:18.660131   65093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:21:18.660525   65093 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:21:18.662934   65093 out.go:177] * Stopping node "no-preload-884893"  ...
	I0815 01:21:18.664427   65093 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 01:21:18.664475   65093 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:21:18.664747   65093 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 01:21:18.664785   65093 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:21:18.667899   65093 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:21:18.668335   65093 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:19:40 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:21:18.668366   65093 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:21:18.668508   65093 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:21:18.668692   65093 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:21:18.668857   65093 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:21:18.668994   65093 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:21:18.768872   65093 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 01:21:18.832937   65093 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 01:21:18.895440   65093 main.go:141] libmachine: Stopping "no-preload-884893"...
	I0815 01:21:18.895497   65093 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:21:18.897116   65093 main.go:141] libmachine: (no-preload-884893) Calling .Stop
	I0815 01:21:18.901259   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 0/120
	I0815 01:21:19.902524   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 1/120
	I0815 01:21:20.904126   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 2/120
	I0815 01:21:21.905784   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 3/120
	I0815 01:21:22.906983   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 4/120
	I0815 01:21:23.908257   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 5/120
	I0815 01:21:24.909872   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 6/120
	I0815 01:21:25.911384   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 7/120
	I0815 01:21:26.912864   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 8/120
	I0815 01:21:27.914316   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 9/120
	I0815 01:21:28.916428   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 10/120
	I0815 01:21:29.918057   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 11/120
	I0815 01:21:30.919570   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 12/120
	I0815 01:21:31.920935   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 13/120
	I0815 01:21:32.922099   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 14/120
	I0815 01:21:33.924092   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 15/120
	I0815 01:21:34.925380   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 16/120
	I0815 01:21:35.927160   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 17/120
	I0815 01:21:36.928481   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 18/120
	I0815 01:21:37.930341   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 19/120
	I0815 01:21:38.932426   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 20/120
	I0815 01:21:39.933883   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 21/120
	I0815 01:21:40.935107   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 22/120
	I0815 01:21:41.936363   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 23/120
	I0815 01:21:42.937805   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 24/120
	I0815 01:21:43.939673   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 25/120
	I0815 01:21:44.942060   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 26/120
	I0815 01:21:45.943678   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 27/120
	I0815 01:21:46.945004   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 28/120
	I0815 01:21:47.947366   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 29/120
	I0815 01:21:48.949200   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 30/120
	I0815 01:21:49.951231   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 31/120
	I0815 01:21:50.952978   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 32/120
	I0815 01:21:51.955050   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 33/120
	I0815 01:21:52.956835   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 34/120
	I0815 01:21:53.958917   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 35/120
	I0815 01:21:54.961153   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 36/120
	I0815 01:21:55.963689   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 37/120
	I0815 01:21:56.984114   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 38/120
	I0815 01:21:57.986069   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 39/120
	I0815 01:21:58.988538   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 40/120
	I0815 01:21:59.989973   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 41/120
	I0815 01:22:00.991339   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 42/120
	I0815 01:22:01.992620   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 43/120
	I0815 01:22:02.994093   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 44/120
	I0815 01:22:03.995887   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 45/120
	I0815 01:22:04.997297   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 46/120
	I0815 01:22:05.999106   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 47/120
	I0815 01:22:07.000637   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 48/120
	I0815 01:22:08.002523   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 49/120
	I0815 01:22:09.003857   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 50/120
	I0815 01:22:10.005412   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 51/120
	I0815 01:22:11.006946   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 52/120
	I0815 01:22:12.008752   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 53/120
	I0815 01:22:13.010157   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 54/120
	I0815 01:22:14.012104   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 55/120
	I0815 01:22:15.013567   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 56/120
	I0815 01:22:16.015079   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 57/120
	I0815 01:22:17.017056   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 58/120
	I0815 01:22:18.018878   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 59/120
	I0815 01:22:19.020658   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 60/120
	I0815 01:22:20.022308   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 61/120
	I0815 01:22:21.023691   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 62/120
	I0815 01:22:22.025344   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 63/120
	I0815 01:22:23.027285   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 64/120
	I0815 01:22:24.029068   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 65/120
	I0815 01:22:25.031176   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 66/120
	I0815 01:22:26.032493   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 67/120
	I0815 01:22:27.034390   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 68/120
	I0815 01:22:28.035989   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 69/120
	I0815 01:22:29.038349   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 70/120
	I0815 01:22:30.039644   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 71/120
	I0815 01:22:31.040896   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 72/120
	I0815 01:22:32.042333   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 73/120
	I0815 01:22:33.043592   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 74/120
	I0815 01:22:34.045220   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 75/120
	I0815 01:22:35.047173   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 76/120
	I0815 01:22:36.048970   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 77/120
	I0815 01:22:37.051119   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 78/120
	I0815 01:22:38.052907   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 79/120
	I0815 01:22:39.055189   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 80/120
	I0815 01:22:40.056841   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 81/120
	I0815 01:22:41.059668   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 82/120
	I0815 01:22:42.061094   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 83/120
	I0815 01:22:43.063159   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 84/120
	I0815 01:22:44.065028   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 85/120
	I0815 01:22:45.066341   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 86/120
	I0815 01:22:46.067933   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 87/120
	I0815 01:22:47.069392   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 88/120
	I0815 01:22:48.070809   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 89/120
	I0815 01:22:49.072824   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 90/120
	I0815 01:22:50.074205   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 91/120
	I0815 01:22:51.075570   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 92/120
	I0815 01:22:52.077020   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 93/120
	I0815 01:22:53.078335   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 94/120
	I0815 01:22:54.080305   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 95/120
	I0815 01:22:55.081827   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 96/120
	I0815 01:22:56.083512   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 97/120
	I0815 01:22:57.084860   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 98/120
	I0815 01:22:58.087063   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 99/120
	I0815 01:22:59.089263   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 100/120
	I0815 01:23:00.091474   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 101/120
	I0815 01:23:01.092798   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 102/120
	I0815 01:23:02.095270   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 103/120
	I0815 01:23:03.097232   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 104/120
	I0815 01:23:04.099187   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 105/120
	I0815 01:23:05.100730   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 106/120
	I0815 01:23:06.102138   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 107/120
	I0815 01:23:07.103405   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 108/120
	I0815 01:23:08.104598   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 109/120
	I0815 01:23:09.106939   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 110/120
	I0815 01:23:10.108485   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 111/120
	I0815 01:23:11.109750   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 112/120
	I0815 01:23:12.110921   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 113/120
	I0815 01:23:13.112271   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 114/120
	I0815 01:23:14.114069   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 115/120
	I0815 01:23:15.115559   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 116/120
	I0815 01:23:16.117053   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 117/120
	I0815 01:23:17.118621   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 118/120
	I0815 01:23:18.119949   65093 main.go:141] libmachine: (no-preload-884893) Waiting for machine to stop 119/120
	I0815 01:23:19.120820   65093 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 01:23:19.120887   65093 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 01:23:19.122647   65093 out.go:177] 
	W0815 01:23:19.123818   65093 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 01:23:19.123839   65093 out.go:239] * 
	* 
	W0815 01:23:19.126429   65093 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:23:19.127626   65093 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-884893 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893: exit status 3 (18.663646361s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 01:23:37.792975   66180 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host
	E0815 01:23:37.793003   66180 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-884893" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.19s)

TestStartStop/group/embed-certs/serial/Stop (138.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-190398 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-190398 --alsologtostderr -v=3: exit status 82 (2m0.489723893s)

-- stdout --
	* Stopping node "embed-certs-190398"  ...
	
	

-- /stdout --
** stderr ** 
	I0815 01:22:04.229092   65675 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:22:04.229224   65675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:22:04.229231   65675 out.go:304] Setting ErrFile to fd 2...
	I0815 01:22:04.229238   65675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:22:04.229465   65675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:22:04.229718   65675 out.go:298] Setting JSON to false
	I0815 01:22:04.229823   65675 mustload.go:65] Loading cluster: embed-certs-190398
	I0815 01:22:04.230173   65675 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:22:04.230257   65675 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:22:04.230468   65675 mustload.go:65] Loading cluster: embed-certs-190398
	I0815 01:22:04.230619   65675 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:22:04.230655   65675 stop.go:39] StopHost: embed-certs-190398
	I0815 01:22:04.231083   65675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:22:04.231147   65675 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:22:04.245659   65675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I0815 01:22:04.246143   65675 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:22:04.246814   65675 main.go:141] libmachine: Using API Version  1
	I0815 01:22:04.246851   65675 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:22:04.247162   65675 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:22:04.249441   65675 out.go:177] * Stopping node "embed-certs-190398"  ...
	I0815 01:22:04.250681   65675 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 01:22:04.250716   65675 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:22:04.250970   65675 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 01:22:04.251006   65675 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:22:04.253831   65675 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:22:04.254267   65675 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:20:42 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:22:04.254290   65675 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:22:04.254471   65675 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:22:04.254633   65675 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:22:04.254762   65675 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:22:04.254897   65675 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:22:04.349309   65675 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 01:22:04.410146   65675 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 01:22:04.474439   65675 main.go:141] libmachine: Stopping "embed-certs-190398"...
	I0815 01:22:04.474483   65675 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:22:04.476112   65675 main.go:141] libmachine: (embed-certs-190398) Calling .Stop
	I0815 01:22:04.479705   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 0/120
	I0815 01:22:05.481216   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 1/120
	I0815 01:22:06.482605   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 2/120
	I0815 01:22:07.483946   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 3/120
	I0815 01:22:08.485180   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 4/120
	I0815 01:22:09.486836   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 5/120
	I0815 01:22:10.488440   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 6/120
	I0815 01:22:11.489787   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 7/120
	I0815 01:22:12.491143   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 8/120
	I0815 01:22:13.492483   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 9/120
	I0815 01:22:14.493990   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 10/120
	I0815 01:22:15.495437   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 11/120
	I0815 01:22:16.496694   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 12/120
	I0815 01:22:17.497961   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 13/120
	I0815 01:22:18.500060   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 14/120
	I0815 01:22:19.501508   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 15/120
	I0815 01:22:20.503457   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 16/120
	I0815 01:22:21.505196   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 17/120
	I0815 01:22:22.507238   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 18/120
	I0815 01:22:23.508718   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 19/120
	I0815 01:22:24.510729   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 20/120
	I0815 01:22:25.512161   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 21/120
	I0815 01:22:26.513585   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 22/120
	I0815 01:22:27.514799   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 23/120
	I0815 01:22:28.516717   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 24/120
	I0815 01:22:29.518571   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 25/120
	I0815 01:22:30.520040   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 26/120
	I0815 01:22:31.521294   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 27/120
	I0815 01:22:32.523040   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 28/120
	I0815 01:22:33.524484   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 29/120
	I0815 01:22:34.526718   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 30/120
	I0815 01:22:35.528633   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 31/120
	I0815 01:22:36.529794   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 32/120
	I0815 01:22:37.531318   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 33/120
	I0815 01:22:38.532334   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 34/120
	I0815 01:22:39.534147   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 35/120
	I0815 01:22:40.536414   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 36/120
	I0815 01:22:41.538134   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 37/120
	I0815 01:22:42.539792   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 38/120
	I0815 01:22:43.541336   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 39/120
	I0815 01:22:44.543139   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 40/120
	I0815 01:22:45.544625   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 41/120
	I0815 01:22:46.546079   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 42/120
	I0815 01:22:47.547660   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 43/120
	I0815 01:22:48.548969   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 44/120
	I0815 01:22:49.550594   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 45/120
	I0815 01:22:50.551810   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 46/120
	I0815 01:22:51.553755   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 47/120
	I0815 01:22:52.555707   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 48/120
	I0815 01:22:53.557114   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 49/120
	I0815 01:22:54.559175   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 50/120
	I0815 01:22:55.560562   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 51/120
	I0815 01:22:56.561931   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 52/120
	I0815 01:22:57.563390   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 53/120
	I0815 01:22:58.564791   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 54/120
	I0815 01:22:59.566691   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 55/120
	I0815 01:23:00.568045   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 56/120
	I0815 01:23:01.569462   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 57/120
	I0815 01:23:02.570775   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 58/120
	I0815 01:23:03.572559   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 59/120
	I0815 01:23:04.574907   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 60/120
	I0815 01:23:05.576252   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 61/120
	I0815 01:23:06.577577   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 62/120
	I0815 01:23:07.578877   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 63/120
	I0815 01:23:08.580113   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 64/120
	I0815 01:23:09.581849   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 65/120
	I0815 01:23:10.583159   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 66/120
	I0815 01:23:11.584484   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 67/120
	I0815 01:23:12.585760   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 68/120
	I0815 01:23:13.587122   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 69/120
	I0815 01:23:14.589124   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 70/120
	I0815 01:23:15.590473   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 71/120
	I0815 01:23:16.592116   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 72/120
	I0815 01:23:17.594189   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 73/120
	I0815 01:23:18.596248   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 74/120
	I0815 01:23:19.597745   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 75/120
	I0815 01:23:20.599060   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 76/120
	I0815 01:23:21.600332   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 77/120
	I0815 01:23:22.601509   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 78/120
	I0815 01:23:23.603505   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 79/120
	I0815 01:23:24.605943   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 80/120
	I0815 01:23:25.607290   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 81/120
	I0815 01:23:26.608714   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 82/120
	I0815 01:23:27.610177   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 83/120
	I0815 01:23:28.611603   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 84/120
	I0815 01:23:29.613366   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 85/120
	I0815 01:23:30.615268   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 86/120
	I0815 01:23:31.616595   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 87/120
	I0815 01:23:32.617957   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 88/120
	I0815 01:23:33.620513   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 89/120
	I0815 01:23:34.622562   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 90/120
	I0815 01:23:35.623933   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 91/120
	I0815 01:23:36.625655   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 92/120
	I0815 01:23:37.627163   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 93/120
	I0815 01:23:38.628473   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 94/120
	I0815 01:23:39.630348   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 95/120
	I0815 01:23:40.632120   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 96/120
	I0815 01:23:41.633502   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 97/120
	I0815 01:23:42.634953   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 98/120
	I0815 01:23:43.636307   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 99/120
	I0815 01:23:44.638436   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 100/120
	I0815 01:23:45.640320   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 101/120
	I0815 01:23:46.641648   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 102/120
	I0815 01:23:47.642843   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 103/120
	I0815 01:23:48.644198   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 104/120
	I0815 01:23:49.646177   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 105/120
	I0815 01:23:50.647626   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 106/120
	I0815 01:23:51.649073   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 107/120
	I0815 01:23:52.650516   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 108/120
	I0815 01:23:53.651936   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 109/120
	I0815 01:23:54.654176   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 110/120
	I0815 01:23:55.655600   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 111/120
	I0815 01:23:56.657026   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 112/120
	I0815 01:23:57.658411   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 113/120
	I0815 01:23:58.659733   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 114/120
	I0815 01:23:59.661716   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 115/120
	I0815 01:24:00.664004   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 116/120
	I0815 01:24:01.665712   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 117/120
	I0815 01:24:02.667086   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 118/120
	I0815 01:24:03.668594   65675 main.go:141] libmachine: (embed-certs-190398) Waiting for machine to stop 119/120
	I0815 01:24:04.669179   65675 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 01:24:04.669226   65675 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 01:24:04.670881   65675 out.go:177] 
	W0815 01:24:04.672039   65675 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 01:24:04.672063   65675 out.go:239] * 
	* 
	W0815 01:24:04.674930   65675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:24:04.676055   65675 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-190398 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398: exit status 3 (18.426483939s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0815 01:24:23.104939   66582 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E0815 01:24:23.104961   66582 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-190398" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-390782 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-390782 create -f testdata/busybox.yaml: exit status 1 (45.567729ms)

** stderr ** 
	error: context "old-k8s-version-390782" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-390782 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 6 (221.722153ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0815 01:22:40.884128   65950 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-390782" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 6 (224.677722ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0815 01:22:41.108071   65979 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-390782" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-390782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-390782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.260730767s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-390782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-390782 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-390782 describe deploy/metrics-server -n kube-system: exit status 1 (43.937369ms)

** stderr ** 
	error: context "old-k8s-version-390782" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-390782 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 6 (210.750963ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0815 01:24:26.626751   66787 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-390782" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.52s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (138.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-018537 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-018537 --alsologtostderr -v=3: exit status 82 (2m0.506125659s)

-- stdout --
	* Stopping node "default-k8s-diff-port-018537"  ...
	
	

-- /stdout --
** stderr ** 
	I0815 01:23:33.820826   66344 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:23:33.820924   66344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:23:33.820932   66344 out.go:304] Setting ErrFile to fd 2...
	I0815 01:23:33.820936   66344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:23:33.821108   66344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:23:33.821323   66344 out.go:298] Setting JSON to false
	I0815 01:23:33.821401   66344 mustload.go:65] Loading cluster: default-k8s-diff-port-018537
	I0815 01:23:33.821733   66344 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:23:33.821799   66344 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:23:33.821960   66344 mustload.go:65] Loading cluster: default-k8s-diff-port-018537
	I0815 01:23:33.822056   66344 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:23:33.822089   66344 stop.go:39] StopHost: default-k8s-diff-port-018537
	I0815 01:23:33.822464   66344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:23:33.822507   66344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:23:33.837087   66344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0815 01:23:33.837570   66344 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:23:33.838168   66344 main.go:141] libmachine: Using API Version  1
	I0815 01:23:33.838187   66344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:23:33.838591   66344 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:23:33.840987   66344 out.go:177] * Stopping node "default-k8s-diff-port-018537"  ...
	I0815 01:23:33.842291   66344 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 01:23:33.842320   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:23:33.842572   66344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 01:23:33.842613   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:23:33.845326   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:23:33.845706   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:22:11 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:23:33.845743   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:23:33.845841   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:23:33.846016   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:23:33.846176   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:23:33.846323   66344 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:23:33.956028   66344 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 01:23:34.015758   66344 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 01:23:34.080794   66344 main.go:141] libmachine: Stopping "default-k8s-diff-port-018537"...
	I0815 01:23:34.080837   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:23:34.082557   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Stop
	I0815 01:23:34.086224   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 0/120
	I0815 01:23:35.087731   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 1/120
	I0815 01:23:36.089183   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 2/120
	I0815 01:23:37.091162   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 3/120
	I0815 01:23:38.092465   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 4/120
	I0815 01:23:39.094074   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 5/120
	I0815 01:23:40.095248   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 6/120
	I0815 01:23:41.096715   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 7/120
	I0815 01:23:42.097980   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 8/120
	I0815 01:23:43.099385   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 9/120
	I0815 01:23:44.101810   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 10/120
	I0815 01:23:45.103157   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 11/120
	I0815 01:23:46.104609   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 12/120
	I0815 01:23:47.106178   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 13/120
	I0815 01:23:48.107481   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 14/120
	I0815 01:23:49.109688   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 15/120
	I0815 01:23:50.111054   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 16/120
	I0815 01:23:51.112465   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 17/120
	I0815 01:23:52.113902   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 18/120
	I0815 01:23:53.115534   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 19/120
	I0815 01:23:54.117819   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 20/120
	I0815 01:23:55.119085   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 21/120
	I0815 01:23:56.120427   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 22/120
	I0815 01:23:57.121787   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 23/120
	I0815 01:23:58.123084   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 24/120
	I0815 01:23:59.125348   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 25/120
	I0815 01:24:00.126629   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 26/120
	I0815 01:24:01.127908   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 27/120
	I0815 01:24:02.129337   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 28/120
	I0815 01:24:03.130682   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 29/120
	I0815 01:24:04.132957   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 30/120
	I0815 01:24:05.134236   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 31/120
	I0815 01:24:06.135513   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 32/120
	I0815 01:24:07.136876   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 33/120
	I0815 01:24:08.138456   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 34/120
	I0815 01:24:09.140459   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 35/120
	I0815 01:24:10.141828   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 36/120
	I0815 01:24:11.143243   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 37/120
	I0815 01:24:12.144876   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 38/120
	I0815 01:24:13.146278   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 39/120
	I0815 01:24:14.148674   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 40/120
	I0815 01:24:15.150062   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 41/120
	I0815 01:24:16.151510   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 42/120
	I0815 01:24:17.152818   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 43/120
	I0815 01:24:18.154128   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 44/120
	I0815 01:24:19.156259   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 45/120
	I0815 01:24:20.157429   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 46/120
	I0815 01:24:21.158883   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 47/120
	I0815 01:24:22.160539   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 48/120
	I0815 01:24:23.162026   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 49/120
	I0815 01:24:24.164221   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 50/120
	I0815 01:24:25.165613   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 51/120
	I0815 01:24:26.166850   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 52/120
	I0815 01:24:27.168107   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 53/120
	I0815 01:24:28.169595   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 54/120
	I0815 01:24:29.171299   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 55/120
	I0815 01:24:30.172789   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 56/120
	I0815 01:24:31.174368   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 57/120
	I0815 01:24:32.176243   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 58/120
	I0815 01:24:33.177561   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 59/120
	I0815 01:24:34.179776   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 60/120
	I0815 01:24:35.181154   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 61/120
	I0815 01:24:36.182561   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 62/120
	I0815 01:24:37.183926   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 63/120
	I0815 01:24:38.185381   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 64/120
	I0815 01:24:39.187271   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 65/120
	I0815 01:24:40.188502   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 66/120
	I0815 01:24:41.190194   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 67/120
	I0815 01:24:42.191588   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 68/120
	I0815 01:24:43.193019   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 69/120
	I0815 01:24:44.195485   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 70/120
	I0815 01:24:45.196875   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 71/120
	I0815 01:24:46.198237   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 72/120
	I0815 01:24:47.199593   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 73/120
	I0815 01:24:48.201008   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 74/120
	I0815 01:24:49.203416   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 75/120
	I0815 01:24:50.204754   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 76/120
	I0815 01:24:51.205931   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 77/120
	I0815 01:24:52.207745   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 78/120
	I0815 01:24:53.209116   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 79/120
	I0815 01:24:54.211092   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 80/120
	I0815 01:24:55.212439   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 81/120
	I0815 01:24:56.213795   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 82/120
	I0815 01:24:57.215171   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 83/120
	I0815 01:24:58.216490   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 84/120
	I0815 01:24:59.218575   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 85/120
	I0815 01:25:00.219925   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 86/120
	I0815 01:25:01.221292   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 87/120
	I0815 01:25:02.222567   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 88/120
	I0815 01:25:03.223997   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 89/120
	I0815 01:25:04.226147   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 90/120
	I0815 01:25:05.227546   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 91/120
	I0815 01:25:06.229017   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 92/120
	I0815 01:25:07.230552   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 93/120
	I0815 01:25:08.232026   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 94/120
	I0815 01:25:09.234250   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 95/120
	I0815 01:25:10.235642   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 96/120
	I0815 01:25:11.236943   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 97/120
	I0815 01:25:12.238728   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 98/120
	I0815 01:25:13.240149   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 99/120
	I0815 01:25:14.242518   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 100/120
	I0815 01:25:15.244082   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 101/120
	I0815 01:25:16.245487   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 102/120
	I0815 01:25:17.246988   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 103/120
	I0815 01:25:18.248308   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 104/120
	I0815 01:25:19.250400   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 105/120
	I0815 01:25:20.252029   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 106/120
	I0815 01:25:21.253476   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 107/120
	I0815 01:25:22.254830   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 108/120
	I0815 01:25:23.256130   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 109/120
	I0815 01:25:24.258222   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 110/120
	I0815 01:25:25.259825   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 111/120
	I0815 01:25:26.261248   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 112/120
	I0815 01:25:27.262635   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 113/120
	I0815 01:25:28.264014   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 114/120
	I0815 01:25:29.266391   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 115/120
	I0815 01:25:30.267786   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 116/120
	I0815 01:25:31.269439   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 117/120
	I0815 01:25:32.270954   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 118/120
	I0815 01:25:33.272231   66344 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for machine to stop 119/120
	I0815 01:25:34.273607   66344 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 01:25:34.273670   66344 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 01:25:34.275573   66344 out.go:177] 
	W0815 01:25:34.276746   66344 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 01:25:34.276761   66344 out.go:239] * 
	W0815 01:25:34.279342   66344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:25:34.280674   66344 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-018537 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537: exit status 3 (18.422930601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:25:52.704937   67246 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host
	E0815 01:25:52.704966   67246 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-018537" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.93s)
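
For manual triage of a GUEST_STOP_TIMEOUT like the one above, the underlying libvirt domain can be inspected and forced off directly on the host. This is an illustrative sketch only, not part of the test harness; the connection URI and the domain name are the ones that appear in this log (the kvm2 driver names the domain after the profile).

	# List domains and confirm the guest is still running after the 120 stop retries
	virsh --connect qemu:///system list --all
	# Ask the guest to shut down via ACPI, then force it off if it does not comply
	virsh --connect qemu:///system shutdown default-k8s-diff-port-018537
	virsh --connect qemu:///system destroy default-k8s-diff-port-018537
	# Re-check what minikube reports for the profile afterwards
	out/minikube-linux-amd64 status -p default-k8s-diff-port-018537 --format='{{.Host}}'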

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893: exit status 3 (3.167888959s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:23:40.960974   66381 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host
	E0815 01:23:40.960995   66381 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-884893 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0815 01:23:45.640571   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-884893 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152343096s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-884893 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893: exit status 3 (3.06343967s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:23:50.177002   66461 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host
	E0815 01:23:50.177021   66461 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.166:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-884893" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
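
The assertion that fails first here is the one at start_stop_delete_test.go:241: after the earlier timed-out stop, the helper runs status --format={{.Host}}, treats the non-zero exit as possibly acceptable ("may be ok"), but still requires the printed host state to be "Stopped". Because SSH to 192.168.61.166:22 has no route to host, the state comes back as "Error", and the addon step is then attempted against an unreachable guest. A rough shell approximation of that expectation (hand-written for illustration; not the test's actual Go code):

	# Query only the host field of the profile status; ignore the exit code,
	# as the test does, and compare the printed state instead
	state="$(out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-884893 || true)"
	if [ "$state" != "Stopped" ]; then
		echo "expected post-stop host status Stopped, got: $state"   # this run printed "Error"
	fi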

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398: exit status 3 (3.16795611s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:24:26.273069   66728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E0815 01:24:26.273092   66728 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-190398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-190398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152315596s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-190398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398: exit status 3 (3.063529079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:24:35.489009   66970 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host
	E0815 01:24:35.489029   66970 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-190398" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
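
The second half of each EnableAddonAfterStop failure is the addons-enable step itself: it exits with MK_ADDON_ENABLE_PAUSED because, per the error text, minikube first checks whether the cluster is paused by running crictl over SSH, and that dial to port 22 also gets "no route to host". A quick reachability probe before retrying the addon could look like the sketch below (illustrative only; the profile name is taken from this log, and the probe assumes crictl is available in the CRI-O guest):

	# If this cannot reach the guest, "addons enable" will fail the same paused check
	out/minikube-linux-amd64 ssh -p embed-certs-190398 -- sudo crictl ps \
		|| echo "guest unreachable over SSH; addons enable would exit with MK_ADDON_ENABLE_PAUSED"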

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (750.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m27.148017747s)

                                                
                                                
-- stdout --
	* [old-k8s-version-390782] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-390782" primary control-plane node in "old-k8s-version-390782" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:24:30.130630   66919 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:24:30.130738   66919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:24:30.130745   66919 out.go:304] Setting ErrFile to fd 2...
	I0815 01:24:30.130750   66919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:24:30.130955   66919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:24:30.131446   66919 out.go:298] Setting JSON to false
	I0815 01:24:30.132296   66919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7615,"bootTime":1723677455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:24:30.132357   66919 start.go:139] virtualization: kvm guest
	I0815 01:24:30.134350   66919 out.go:177] * [old-k8s-version-390782] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:24:30.135651   66919 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:24:30.135647   66919 notify.go:220] Checking for updates...
	I0815 01:24:30.137979   66919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:24:30.139160   66919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:24:30.140256   66919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:24:30.141343   66919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:24:30.142388   66919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:24:30.143757   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:24:30.144146   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:24:30.144195   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:24:30.159397   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0815 01:24:30.159799   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:24:30.160284   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:24:30.160304   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:24:30.160625   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:24:30.160823   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:24:30.162377   66919 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 01:24:30.163384   66919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:24:30.163779   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:24:30.163818   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:24:30.178643   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0815 01:24:30.179054   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:24:30.179661   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:24:30.179698   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:24:30.180041   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:24:30.180239   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:24:30.214400   66919 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:24:30.215806   66919 start.go:297] selected driver: kvm2
	I0815 01:24:30.215832   66919 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:24:30.215952   66919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:24:30.216695   66919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:24:30.216796   66919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:24:30.231090   66919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:24:30.231438   66919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:24:30.231495   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:24:30.231512   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:24:30.231550   66919 start.go:340] cluster config:
	{Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:24:30.231652   66919 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:24:30.233261   66919 out.go:177] * Starting "old-k8s-version-390782" primary control-plane node in "old-k8s-version-390782" cluster
	I0815 01:24:30.234232   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:24:30.234266   66919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:24:30.234275   66919 cache.go:56] Caching tarball of preloaded images
	I0815 01:24:30.234385   66919 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:24:30.234400   66919 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 01:24:30.234492   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:24:30.234680   66919 start.go:360] acquireMachinesLock for old-k8s-version-390782: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
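As a hedged illustration only (not minikube's provision.go; the throwaway CA, key size, and 24h validity below are assumptions), issuing a CA-signed server certificate carrying a SAN list like the one reported above can be sketched in Go with crypto/x509:

// Hedged sketch only: issue a CA-signed server certificate with SANs,
// roughly what the provisioning step above does. The throwaway CA, key
// size, and 24h validity are assumptions for illustration.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate carrying the SAN list reported in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-390782"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-390782"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.21")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server certificate, %d DER bytes\n", len(srvDER))
}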
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
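As a hedged illustration of the guest/host clock-skew check the fix.go lines above log, a minimal Go sketch using the two timestamps from this run (the one-second tolerance is an assumed value, not minikube's actual threshold):

// Hedged sketch only: compute the guest clock delta and compare it to an
// assumed tolerance, as in the fix.go lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 8, 15, 1, 28, 46, 429908383, time.UTC)  // guest clock
	remote := time.Date(2024, 8, 15, 1, 28, 46, 354934297, time.UTC) // host-side reference

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // assumption for illustration
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}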
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
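As a hedged illustration only (the sample file contents are assumptions, and this is not minikube's implementation), the pause_image / cgroup_manager / conmon_cgroup rewrites that the sed commands above apply to /etc/crio/crio.conf.d/02-crio.conf can be sketched in Go against an in-memory copy of the file:

// Hedged sketch only: replicate the crio.conf edits above with regexps on
// an assumed in-memory copy of 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}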
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
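As a hedged illustration of the idempotent /etc/hosts update performed by the bash one-liner above (drop any stale host.minikube.internal entry, then append the current mapping), a minimal Go sketch operating on an in-memory copy rather than the real file:

// Hedged sketch only: same grep -v / echo / overwrite pattern as the
// one-liner above, applied to an in-memory hosts file.
package main

import (
	"fmt"
	"strings"
)

func updateHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // grep -v equivalent: skip the old entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name)) // echo equivalent
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.99\thost.minikube.internal\n"
	fmt.Print(updateHosts(hosts, "192.168.50.1", "host.minikube.internal"))
}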
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
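As a hedged illustration of what the "openssl x509 -checkend 86400" probes above ask (does the certificate expire within the next 24 hours), a minimal Go sketch using crypto/x509; the path is one of the certs probed above, and this is not minikube's implementation:

// Hedged sketch only: report whether a PEM certificate expires within the
// given duration, mirroring openssl's -checkend semantics.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}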
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
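The four grep/rm pairs above implement a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs (here all four files are missing, so every grep exits with status 2 and the rm is a no-op). Compressed into one loop, this is a sketch of the same shell steps the log shows, not minikube code:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it points at the expected control-plane endpoint
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done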
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
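kubeadm's own hint above is the practical next step when wait-control-plane times out: check whether the kubelet is running, then look for a crashed control-plane container via the CRI-O socket. Collected into one sequence (the commands are taken from the kubeadm output above; only the tail length is an arbitrary choice):

	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# list any kube-* containers CRI-O started, then inspect the failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # replace CONTAINERID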
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	* 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	* 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 

                                                
                                                
** /stderr **
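The stderr capture above shows kubeadm timing out in wait-control-plane because the kubelet's healthz endpoint on port 10248 never answered. The commands below are a minimal, consolidated sketch of the manual triage that the kubeadm output itself recommends, run from inside the guest of the failing profile; the `minikube ssh -p old-k8s-version-390782` entry point is an assumption about how one would reach that node, not something shown in this report.

	# Open a shell on the failing node (profile name taken from the test below); this is an assumed entry point.
	minikube ssh -p old-k8s-version-390782

	# Inside the guest: is the kubelet unit running at all?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# Probe the same healthz endpoint that kubeadm's [kubelet-check] polls.
	curl -sSL http://localhost:10248/healthz

	# List any control-plane containers CRI-O managed to start, as the kubeadm output suggests.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause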
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-390782 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
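The `* Suggestion:` line in the log above points at the kubelet cgroup driver; a hedged sketch of that retry, reusing the arguments from the failed invocation plus the suggested `--extra-config` flag, is shown below. Whether this clears the K8S_KUBELET_NOT_RUNNING exit for this run is not established by the report.

	# Retry the same start with the kubelet cgroup driver pinned to systemd,
	# as suggested in the failure output (all other flags copied from the failing args above).
	out/minikube-linux-amd64 start -p old-k8s-version-390782 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd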
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (233.726581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25: (1.623138313s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
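	The server certificate above is freshly minted from the local minikube CA with the SANs listed in the san=[...] field. A quick, illustrative way to confirm which names actually landed in the generated file (path taken from the log line above; this check is not part of the test run itself):
	    $ openssl x509 -noout -subject -in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem
	    $ openssl x509 -noout -text -in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'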
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
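	The guest-clock check above runs date +%s.%N inside the VM and compares the result with the host wall clock; the start only proceeds because the delta is inside the allowed tolerance. A hand-run equivalent, assuming the minikube CLI and the profile name from this test (the test itself goes through its own SSH client rather than 'minikube ssh'):
	    $ guest=$(minikube ssh -p old-k8s-version-390782 -- date +%s.%N)
	    $ host=$(date +%s.%N)
	    $ awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3fs\n", h - g }'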
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
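	The sed edits above pin the pause image to registry.k8s.io/pause:3.2 and force the cgroupfs cgroup manager (with conmon placed in the "pod" cgroup) before CRI-O is restarted. An illustrative way to confirm the effective values on the guest after the restart:
	    $ sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
	    $ sudo cat /etc/crictl.yaml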
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
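	With the preload tarball unpacked into /var and removed, the next step queries CRI-O's image store to decide whether the expected v1.20.0 images are present. A hand-run version of that check (jq is assumed to be available on the guest; the test parses the JSON in Go instead):
	    $ sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort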
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
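	The kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm.yaml were written a few lines above, so the daemon-reload plus start brings the kubelet up with the ExecStart flags shown earlier. To inspect the merged unit and the service state on the guest (illustrative only):
	    $ systemctl cat kubelet.service
	    $ systemctl is-active kubelet && sudo journalctl -u kubelet --no-pager -n 20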
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
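	The hashed link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each certificate in the default verify directory gets a symlink named <subject hash>.0 so that openssl and other TLS clients can find it. The link the log creates for 202792.pem can be written generically as:
	    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem)
	    $ sudo ln -fs /etc/ssl/certs/202792.pem "/etc/ssl/certs/${h}.0"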
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
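	Each of the -checkend 86400 probes above exits non-zero when the certificate would expire within the next 24 hours; minikube uses this as its staleness check for the control-plane certificates. The same probe stand-alone (any of the certificate paths above can be substituted):
	    $ sudo openssl x509 -noout -enddate -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo 'valid for at least 24h' || echo 'expires within 24h'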
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
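The grep/rm pairs above are the stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the kubeadm phases below can rewrite it. A rough local sketch of the same loop (paths and endpoint copied from the log; the sudo/SSH plumbing is omitted):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                os.Remove(f) // "kubeadm init phase kubeconfig" recreates it below
            }
        }
    }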
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
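While that restart proceeds, the kvm2 driver is still waiting for the embed-certs-190398 VM to pick up a DHCP lease, retrying with progressively longer, jittered delays. A generic sketch of such a wait loop; lookupIP is a hypothetical stand-in for querying the hypervisor's leases, and the exact backoff policy inside minikube's retry package is not shown in the log:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical helper standing in for a DHCP-lease query.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            // grow the delay and add jitter, roughly like the intervals in the log
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
        if _, err := waitForIP("embed-certs-190398", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }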
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
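Once the kubeadm phases finish, the restart path polls about twice a second for a kube-apiserver process before attempting any API-level health checks. A minimal sketch of that wait, run locally instead of through the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // same pattern as the log: pgrep exits 0 once a matching process exists
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(30 * time.Second))
    }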
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
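The provision.go line above generates a machine server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube. A self-signed sketch with those SANs; minikube actually signs with the CA key under .minikube/certs, which is omitted here, and the expiry below simply reuses the profile's 26280h CertExpiration:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-190398"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            DNSNames:     []string{"embed-certs-190398", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.151")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // self-signed here for brevity; the real server.pem is signed by minikube's CA
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            fmt.Println(err)
            return
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }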
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
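fix.go compares the guest's date +%s.%N output against the host's wall clock and proceeds when the delta (82.665043ms here) is inside tolerance. A small sketch of that comparison; the 2-second tolerance below is an assumption for illustration, not a value taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance returns the absolute guest/host clock delta and
    // whether it falls inside the allowed tolerance.
    func clockWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        remote := time.Now()
        guest := remote.Add(82 * time.Millisecond)                      // close to the delta observed in the log
        delta, ok := clockWithinTolerance(guest, remote, 2*time.Second) // tolerance value is an assumption
        fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }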
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
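Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf pinned to the pause:3.10 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch that writes an equivalent drop-in directly; the values come from the commands above, while the TOML section names are assumptions about how the file is organized:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Settings derived from the sed edits in the log; section headers are assumed.
        dropIn := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `
        // Needs root, just like the sed invocations above.
        if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(dropIn), 0o644); err != nil {
            fmt.Println(err)
        }
    }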
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
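For context, the "will retry after ..." lines above (retry.go:31) show the driver polling libvirt for the guest's DHCP-assigned IP with a growing delay. Below is a minimal Go sketch of that pattern; waitForIP and getIP are hypothetical helpers for illustration only, not minikube's actual API.

// Sketch: poll for a machine's IP with a growing, slightly jittered delay,
// mirroring the "will retry after ...: waiting for machine to come up" lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP stands in for querying the libvirt DHCP leases; it fails until the
// guest has actually requested an address.
func getIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // back off gradually, as the log's growing intervals suggest
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}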
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
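The preload flow above (crio.go / cache_images.go) checks the runtime for a known image and, if it is missing, copies the preload tarball and extracts it with lz4. The following is a minimal Go sketch of that flow under the assumption of local execution rather than ssh_runner; the helper names are illustrative, not minikube's implementation.

// Sketch: check crictl for a reference image; if absent, extract a preloaded
// image tarball the same way the log does (tar ... -I lz4 -C /var -xf ...).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imagesPreloaded(refImage string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	// Simple substring heuristic for the sketch; real code would parse the JSON.
	return strings.Contains(string(out), refImage), nil
}

func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.Run()
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println("extract failed:", err)
			return
		}
	}
	fmt.Println("images ready")
}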
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
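The bash one-liner above makes the /etc/hosts entry idempotent: strip any stale line for the hostname, append the fresh "IP<TAB>hostname" mapping, then copy the result back. A small Go sketch of the same idea, operating on an arbitrary path so it can be tried without root (the function name is an assumption for illustration):

// Sketch: drop existing lines for a hostname and append the desired mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue // skip blanks and any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.72.151", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}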
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
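The series of openssl commands above verifies that each control-plane certificate is still valid for at least one more day: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within 86400 seconds. A minimal Go sketch of that check (paths copied from the log; the helper itself is illustrative):

// Sketch: report whether a certificate expires within the next 24 hours.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate expires within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithinADay(c)
		fmt.Println(c, "expiring within 24h:", soon, "err:", err)
	}
}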
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
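The repeated pgrep invocations above are the "waiting for apiserver process to appear" step: the same pattern-match is retried roughly every 500ms until a kube-apiserver process exists. A minimal Go sketch of that wait loop, assuming local execution in place of ssh_runner:

// Sketch: poll `pgrep -xnf kube-apiserver.*minikube.*` until it succeeds.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a matching process is found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}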
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
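Once the process exists, the log above polls https://<node>:8443/healthz until it returns 200, tolerating the intermediate 403 (unauthenticated "system:anonymous" probes before RBAC bootstrap completes) and 500 (poststarthooks still failing) responses. A minimal Go sketch of that probe; it skips TLS verification and sends no client credentials purely for brevity, unlike minikube's real client:

// Sketch: poll an apiserver /healthz endpoint until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.151:8443/healthz", 2*time.Minute))
}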
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
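The pod_ready.go lines above wait for each system-critical pod's Ready condition to become "True", skipping pods whose node itself is not yet Ready. A minimal Go sketch of the same readiness poll, shelling out to kubectl with a jsonpath query rather than using client-go; the helper is illustrative only:

// Sketch: poll a pod's Ready condition via kubectl until it reports "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(context, namespace, pod string, timeout time.Duration) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	err := waitPodReady("embed-certs-190398", "kube-system",
		"kube-controller-manager-embed-certs-190398", 4*time.Minute)
	fmt.Println(err)
}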
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
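The shell snippet above is how the provisioner makes the new hostname resolve locally: rewrite an existing 127.0.1.1 line if one is present, otherwise append one. A minimal Go sketch that assembles the same guarded command; the helper name hostsCommand is illustrative, not minikube's actual API:

	package main

	import "fmt"

	// hostsCommand builds a shell snippet that maps the given hostname to
	// 127.0.1.1, editing an existing entry in place or appending a new one.
	func hostsCommand(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsCommand("default-k8s-diff-port-018537"))
	}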
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
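The clock check above compares the guest's wall clock (read over SSH with `date`) against the host's view of the same instant and only resynchronizes when the difference exceeds a tolerance. A small Go sketch of that comparison, assuming a fixed one-second tolerance; the function name clockDeltaOK is illustrative:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK returns the absolute guest/host clock difference and
	// whether it is small enough that no resync is needed.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(86 * time.Millisecond) // e.g. the ~86ms delta seen above
		delta, ok := clockDeltaOK(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}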
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
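While the restarted VM boots, the driver polls for a DHCP lease, retrying with a randomized, slowly growing delay until an IP address appears (the retry.go lines above). A compact Go sketch of that wait loop, with a hypothetical getIP stub standing in for the libvirt lookup and a placeholder address:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("no IP yet")

	// getIP is a stand-in for querying the hypervisor for the domain's lease.
	func getIP(attempt int) (string, error) {
		if attempt < 4 {
			return "", errNoIP
		}
		return "192.0.2.10", nil // placeholder address
	}

	// waitForIP polls until an address is available, sleeping a randomized,
	// growing interval between attempts, up to a deadline.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			if ip, err := getIP(attempt); err == nil {
				return ip, nil
			}
			backoff := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(attempt+1)
			time.Sleep(backoff)
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		fmt.Println(ip, err)
	}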
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
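
Editor's note: the block above shows how minikube installs its CA material into the guest's trust store: each PEM is copied to /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0, 51391683.0). A minimal hand-run sketch of the same steps, using a hypothetical mycert.pem rather than any file from this run:

    # Copy the CA into the shared certificate directory (paths taken from the log above)
    sudo cp mycert.pem /usr/share/ca-certificates/mycert.pem
    # Compute the OpenSSL subject hash, which names the symlink in /etc/ssl/certs
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/mycert.pem)
    # Link the certificate as "<hash>.0" so OpenSSL-based clients can resolve it
    sudo ln -fs /usr/share/ca-certificates/mycert.pem "/etc/ssl/certs/${HASH}.0"
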
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
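
Editor's note: each `openssl x509 -checkend 86400` call above asks whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status is what would push minikube to regenerate the certificate. A stand-alone equivalent, with an illustrative path borrowed from the log:

    # Exit 0 if the certificate is still valid 24h from now, non-zero otherwise.
    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$CRT" -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h - regeneration needed"
    fi
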
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
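
Editor's note: the grep/rm pairs above implement stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube checks whether it already points at https://control-plane.minikube.internal:8444 and deletes it otherwise, so the later `kubeadm init phase kubeconfig` regenerates it. Condensed into a single loop (a sketch of the behaviour shown, not minikube's own code):

    # Drop any kubeconfig that does not reference the expected control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done
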
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
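
Editor's note: restartPrimaryControlPlane reuses individual kubeadm phases instead of a full `kubeadm init`, regenerating PKI and kubeconfigs and then bringing the kubelet and the static-pod control plane back up from the existing /var/tmp/minikube/kubeadm.yaml. Reduced to a plain shell sketch (binary path taken from the log; PATH handling omitted):

    KUBEADM=/var/lib/minikube/binaries/v1.31.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    # Regenerate certificates and kubeconfig files, then restart the kubelet,
    # the static-pod control plane, and the local etcd member, in that order.
    sudo "$KUBEADM" init phase certs all --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local --config "$CFG"
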
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
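
Editor's note: the /healthz probes above move from 403 (the unauthenticated probe is rejected while the RBAC bootstrap roles are still missing), to 500 (the bootstrap-roles and system-priority-classes post-start hooks have not finished), to 200 once the restarted control plane settles. A curl loop that reproduces the same wait, assuming the endpoint shown in the log (-k because the probe is unauthenticated and only checks liveness):

    # Poll the apiserver health endpoint until it reports "ok" or we give up.
    for i in $(seq 1 60); do
        body=$(curl -sk https://192.168.39.223:8444/healthz) && [ "$body" = "ok" ] && break
        echo "healthz not ready yet (attempt $i): $body"
        sleep 1
    done
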
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
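
Editor's note: the bridge CNI step writes a single conflist into /etc/cni/net.d; the 496-byte payload itself is not captured in the log. The snippet below is only representative of a bridge + portmap conflist of the kind a CNI-aware runtime expects; every field value here is an assumption, not the file's verbatim contents:

    # Illustrative only: the real /etc/cni/net.d/1-k8s.conflist is not shown in this log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
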
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
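	The five `kubeadm init phase` commands above (certs, kubeconfig, kubelet-start, control-plane, etcd) are the restart path's replacement for a full `kubeadm init`. A hypothetical stand-alone driver for the same sequence, reusing the binary and config paths shown in the log, might look like the sketch below; minikube itself runs these remotely through its ssh_runner rather than locally:

```go
// Hypothetical driver for the phase sequence above; paths are taken from
// the log, but this is a sketch and not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
```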
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
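	The preceding healthz probes show the expected bootstrap progression: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are pending, then 200. A minimal sketch (not minikube's api_server.go) of polling the endpoint with the same tolerance for 403/500 responses:

```go
// Minimal sketch: poll /healthz until it returns 200, treating 403 and 500
// as "still bootstrapping" in the same way the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous and the apiserver certificate is not in the
		// host trust store during bootstrap, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (RBAC not yet bootstrapped) and 500 (post-start hooks
			// still failing) both mean "keep waiting".
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.166:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```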
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
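	The pod_ready loop above fetches each system-critical pod and inspects its Ready condition, skipping pods whose node is not yet Ready. A rough client-go equivalent, with a hypothetical kubeconfig path and a pod name taken from the log, could be:

```go
// Sketch only, not minikube's pod_ready implementation: poll a kube-system
// pod until its PodReady condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; substitute the cluster's own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-884893", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```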
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
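
	The block above is one iteration of the log-gathering retry loop visible throughout this output: it probes for a running kube-apiserver process, asks crictl for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to dumping kubelet, dmesg and CRI-O logs. A minimal, illustrative Go sketch of that container probe follows (not minikube source; it assumes crictl is on the node's PATH and mirrors the Run: lines logged above):

	    // probe_crictl.go -- illustrative only, not minikube source.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
	    // the container IDs it prints; empty output is what the log above reports as
	    // `No container was found matching "<name>"`.
	    func listContainerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := listContainerIDs(c)
	            if err != nil {
	                fmt.Printf("probe %s: %v\n", c, err)
	                continue
	            }
	            if len(ids) == 0 {
	                fmt.Printf("no container was found matching %q\n", c)
	                continue
	            }
	            fmt.Printf("%s: %v\n", c, ids)
	        }
	    }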
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
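
	Each iteration also runs "kubectl describe nodes" through the pinned v1.20.0 binary, and it fails the same way every time: the connection to localhost:8443 is refused because nothing is serving the API on that port yet. A quick, illustrative way to confirm that failure mode from the node is a plain TCP dial (hypothetical file name; not part of the test suite):

	    // check_apiserver.go -- illustrative connectivity check only.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // "connection to the server localhost:8443 was refused" in the log above
	        // is a TCP-level refusal: no apiserver is listening on that port.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on localhost:8443")
	    }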
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
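
	Interleaved with the gathering loop, the pod_ready.go entries from the other test processes poll metrics-server pods in kube-system and keep observing Ready=False. The check they report amounts to reading the pod's Ready condition; a small illustrative sketch of the same poll using kubectl's jsonpath output (pod name copied from the log above; file and helper names are hypothetical):

	    // pod_ready_poll.go -- illustrative only; mirrors the pod_ready.go entries above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // readyStatus returns the pod's Ready condition status: "True", "False", or "".
	    func readyStatus(namespace, pod string) (string, error) {
	        out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
	            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	        for i := 0; i < 5; i++ {
	            status, err := readyStatus("kube-system", "metrics-server-6867b74b74-gdpxh")
	            fmt.Printf("attempt %d: ready=%q err=%v\n", i+1, status, err)
	            if status == "True" {
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }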
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
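	The log-gathering cycle above repeats, essentially unchanged, for the rest of this log. Reproducing the same diagnostics by hand amounts to roughly the commands below, a minimal sketch assembled from the command lines recorded in the log; it assumes a shell inside the minikube guest (for example via "minikube ssh" for the profile under test) and reuses the binary paths shown above.
	
	    # Check whether a kube-apiserver process is running at all
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Ask the CRI runtime for containers of each control-plane component;
	    # empty output corresponds to the "0 containers" lines in the log
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # Gather the same logs the test gathers
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a || sudo docker ps -a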
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
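
Each polling cycle enumerates the expected control-plane and addon containers one component at a time; every `found id: ""` / "0 containers" pair above is one crictl query coming back empty. The sweep can be reproduced with a small loop (the crictl invocation is taken from the log, the loop structure is illustrative):

    #!/bin/bash
    # Query CRI-O for each component minikube looks for; empty output means the container was never created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        echo "${name}: ${ids:-<none>}"
    done
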
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
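
Interleaved with the restart loop, three other test clusters (processes 66492, 67000 and 67451) keep polling their metrics-server pods, which never report Ready. The same condition can be read directly with kubectl; a sketch, assuming a working kubeconfig for the cluster in question and using one of the pod names from the log:

    # Prints "True" once the pod's Ready condition is satisfied, "False" while the readiness probe keeps failing.
    kubectl -n kube-system get pod metrics-server-6867b74b74-qnnqs \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
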
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
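The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following `kubeadm init` can regenerate it. A compact bash condensation of that per-file check, purely illustrative and using only the paths and URL shown in the log:

	# condensed form of the check-and-remove loop performed above
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs that do not point at the expected endpoint
	done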
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
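The pod_ready lines interleaved above are the harness's 4m0s "WaitExtra" poll for the metrics-server pod, which times out here and triggers the cluster reset. As a hedged reference, the same readiness condition can be checked by hand against that cluster; the pod name and namespace are taken from the log, the 4m timeout mirrors the harness value:

	# hedged manual equivalent of the readiness wait that timed out above
	kubectl -n kube-system get pod metrics-server-6867b74b74-sfnng
	kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-sfnng --timeout=4m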
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
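The api_server lines above poll the freshly started apiserver's /healthz endpoint (https://192.168.39.223:8444 for this default-k8s-diff-port profile) until it returns 200/ok. A hedged manual spot-check against the same endpoint, with -k used here only because this sketch skips certificate verification:

	# manual healthz probe against the address/port shown in the log; expected body: ok
	curl -ks https://192.168.39.223:8444/healthz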
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
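Once the "Done!" line is printed, the kubeconfig context named there is usable. A hedged smoke test of the restored profile (context name taken from the line above; per the pod list earlier, the metrics-server pod is still expected to show as not ready):

	# quick sanity check of the recovered cluster via the context kubectl was just configured with
	kubectl --context default-k8s-diff-port-018537 get nodes -o wide
	kubectl --context default-k8s-diff-port-018537 get pods -A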
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
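The join commands printed above carry a bootstrap token plus a --discovery-token-ca-cert-hash pin. That hash is the SHA-256 of the cluster CA's public key, which lets a joining node verify it is talking to the intended control plane before trusting it. As a sketch of the standard kubeadm procedure (not something this test runs), the value can be recomputed on the control-plane host with:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # output should match the sha256:9c3333a0... value in the join command above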
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
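Here the bridge CNI step copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node. A hypothetical spot-check of that file and of the plugin binaries it refers to (the binary path is an assumption, and none of this is executed by the test) would look like:

    sudo cat /etc/cni/net.d/1-k8s.conflist     # the conflist copied in the line above
    ls /opt/cni/bin                            # bridge/host-local/portmap binaries, if installed at the usual path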
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
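With default-storageclass, storage-provisioner and metrics-server enabled, an illustrative follow-up (not captured in this log) against the embed-certs-190398 context would confirm the objects the manifests above created; the APIService name below follows the upstream metrics-server manifest and is an assumption here:

    kubectl --context embed-certs-190398 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-190398 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-190398 get storageclass
    kubectl --context embed-certs-190398 -n kube-system get pod storage-provisioner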
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
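The two checks above are minikube probing https://192.168.72.151:8443/healthz and then reading the control-plane version. Roughly the same probes can be reproduced by hand (illustrative only; on a default configuration /healthz and /version are readable even without credentials via the system:public-info-viewer role, and -k merely skips TLS verification of the self-signed apiserver certificate):

    curl -sk https://192.168.72.151:8443/healthz                 # expect: ok
    kubectl --context embed-certs-190398 get --raw /version      # reports v1.31.0 for this cluster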
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
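At this point the 4m0s WaitExtra budget for metrics-server-6867b74b74-qnnqs has expired without the pod ever reporting Ready, so rather than keep restarting the existing control plane, minikube resets it with kubeadm and re-initialises from scratch (the reset and init follow below). Illustrative commands for digging into such a stuck pod, not part of the captured run, would be:

    kubectl -n kube-system describe pod metrics-server-6867b74b74-qnnqs     # pod name taken from the log above
    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20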
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
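	The kubeadm failure above lists its own troubleshooting steps; as a minimal sketch, the same checks could be run on the node over ssh, assuming the CRI-O socket path shown in the log:
	
		$ systemctl status kubelet
		$ journalctl -xeu kubelet
		$ curl -sSL http://localhost:10248/healthz        # the kubelet health endpoint the kubelet-check polls
		$ crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		$ crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID taken from the previous command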
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
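	The metrics-server pod is still Pending when the profile finishes starting; a minimal sketch of how the addon could be checked afterwards, assuming the kubectl context name set above:
	
		$ kubectl --context no-preload-884893 -n kube-system get pods -l k8s-app=metrics-server
		$ kubectl --context no-preload-884893 get apiservice v1beta1.metrics.k8s.io
		$ kubectl --context no-preload-884893 top nodes    # succeeds only once the metrics API is being served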
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
	
	
	==> CRI-O <==
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.146168528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723685819146147684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3201f63-c40f-4e23-bd95-fdc92b23699e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.146756683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67fd158d-c1d0-47c7-9d40-e9acd46f226f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.146806376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67fd158d-c1d0-47c7-9d40-e9acd46f226f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.146845956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67fd158d-c1d0-47c7-9d40-e9acd46f226f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.182416143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5271e60-eb51-4772-9219-62a88b386f55 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.182541530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5271e60-eb51-4772-9219-62a88b386f55 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.183995294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9251eab1-806b-4a9c-b77f-ed81746e9926 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.184349487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723685819184321658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9251eab1-806b-4a9c-b77f-ed81746e9926 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.185101158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=374e34b6-a50a-4146-8838-3b8a6e98b3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.185150597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=374e34b6-a50a-4146-8838-3b8a6e98b3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.185183861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=374e34b6-a50a-4146-8838-3b8a6e98b3f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.218524762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=100d664a-fd73-4ce6-9e89-61e0a834521d name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.218601919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=100d664a-fd73-4ce6-9e89-61e0a834521d name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.220016507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f36d79e8-01bf-4edb-8d93-74a5e1ece060 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.220384407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723685819220350227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f36d79e8-01bf-4edb-8d93-74a5e1ece060 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.221066610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2be0e7b2-0620-4469-be70-ce6454f82f64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.221118771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2be0e7b2-0620-4469-be70-ce6454f82f64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.221147679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2be0e7b2-0620-4469-be70-ce6454f82f64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.253868763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb2aad37-d6e4-4df0-b19d-7b46aa3fdf67 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.253937390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb2aad37-d6e4-4df0-b19d-7b46aa3fdf67 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.255017484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eba77271-78e1-4868-91f6-212d8972c642 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.255412334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723685819255387228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eba77271-78e1-4868-91f6-212d8972c642 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.256052938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e32d271-8aa0-452b-873c-8f0a9c527af6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.256099482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e32d271-8aa0-452b-873c-8f0a9c527af6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:36:59 old-k8s-version-390782 crio[654]: time="2024-08-15 01:36:59.256129213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e32d271-8aa0-452b-873c-8f0a9c527af6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 01:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037789] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.678929] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.857055] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.487001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.860898] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.063147] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057764] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.185464] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.131345] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.258818] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.930800] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.065041] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.685778] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[Aug15 01:29] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 01:33] systemd-fstab-generator[5155]: Ignoring "noauto" option for root device
	[Aug15 01:35] systemd-fstab-generator[5437]: Ignoring "noauto" option for root device
	[  +0.071528] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:36:59 up 8 min,  0 users,  load average: 0.02, 0.10, 0.07
	Linux old-k8s-version-390782 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000876ef0)
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bb9ef0, 0x4f0ac20, 0xc0004d3ae0, 0x1, 0xc0001020c0)
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002547e0, 0xc0001020c0)
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0006a3280, 0xc0009ad700)
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 15 01:36:56 old-k8s-version-390782 kubelet[5614]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 15 01:36:56 old-k8s-version-390782 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 15 01:36:56 old-k8s-version-390782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 15 01:36:57 old-k8s-version-390782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 15 01:36:57 old-k8s-version-390782 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 15 01:36:57 old-k8s-version-390782 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 15 01:36:57 old-k8s-version-390782 kubelet[5666]: I0815 01:36:57.163045    5666 server.go:416] Version: v1.20.0
	Aug 15 01:36:57 old-k8s-version-390782 kubelet[5666]: I0815 01:36:57.163322    5666 server.go:837] Client rotation is on, will bootstrap in background
	Aug 15 01:36:57 old-k8s-version-390782 kubelet[5666]: I0815 01:36:57.165286    5666 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 15 01:36:57 old-k8s-version-390782 kubelet[5666]: W0815 01:36:57.167141    5666 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 15 01:36:57 old-k8s-version-390782 kubelet[5666]: I0815 01:36:57.167221    5666 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (224.922768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-390782" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (750.74s)
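The SecondStart failure above is the kubeadm wait-control-plane timeout: the kubelet on this v1.20.0 profile never answers its localhost:10248/healthz probe, so no control-plane containers are ever created (the container status table is empty) and the apiserver on localhost:8443 stays unreachable. The minikube output itself names the next diagnostic steps; the lines below are only a sketch of that suggestion, with the profile name taken from this run and the remaining start flags assumed to match the Audit table shown further down, not a command this run executed.

	# inspect why the kubelet keeps crash-looping (the journal above shows restart counter at 20)
	journalctl -xeu kubelet
	# retry the start with the cgroup driver pinned to systemd, as the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-390782 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0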

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537: exit status 3 (3.167891126s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:25:55.873067   67341 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host
	E0815 01:25:55.873088   67341 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151897427s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537: exit status 3 (3.064121521s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 01:26:05.089044   67405 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host
	E0815 01:26:05.089089   67405 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.223:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-018537" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
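The EnableAddonAfterStop failure above is a connectivity problem rather than an addon problem: every SSH dial to 192.168.39.223:22 returns "no route to host", so the post-stop status check reports "Error" where the test expects "Stopped", and the dashboard enable exits 11 before it can reach the node. A minimal sketch of the two checks the test performs (profile name from this run; quoting added for an interactive shell, exit-status handling omitted):

	# post-stop host state: the test expects "Stopped"; this run got "Error"
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-018537
	# enabling an addon against the stopped profile should still succeed; here it failed with MK_ADDON_ENABLE_PAUSED
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-018537 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4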

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:43:05.054754054 +0000 UTC m=+5859.754985658
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018537 logs -n 25: (1.933605831s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
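	A minimal sketch of what /etc/crio/crio.conf.d/02-crio.conf is expected to hold after the three sed edits above (the values are the ones this log itself reports; any other keys already present in the drop-in are left untouched):
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"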
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
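	The six openssl invocations above each probe one certificate for expiry within the next 24 hours (86400 seconds); openssl exits 0 only if the certificate stays valid for the whole window. A minimal Go sketch of the same probe, for illustration only (this is not minikube's implementation; the path used in main is simply one of the files checked above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor reports whether the certificate at path is still valid for at
	// least the given number of seconds, by shelling out to openssl exactly as
	// the log lines above do: "-checkend" makes openssl exit non-zero if the
	// certificate would expire within the window.
	func certValidFor(path string, seconds int) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds))
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println(certValidFor("/var/lib/minikube/certs/front-proxy-client.crt", 86400))
	}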
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
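	The half-second cadence of the pgrep lines above is a simple poll loop waiting for the kube-apiserver process to appear after the kubeadm init phases. A rough Go approximation, for illustration only (not the actual api_server.go code; the timeout and interval here are assumptions, and the probe runs locally rather than over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process whose command
	// line mentions "minikube" shows up, or until the timeout elapses.
	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same probe the log shows being issued repeatedly.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return true
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		fmt.Println(waitForAPIServer(1 * time.Minute))
	}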
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
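provision.go generates a server certificate whose subject alternative names cover the loopback address, the machine IP, the machine name, localhost, and minikube, as listed in the line above. The sketch below illustrates, with the standard library only, how such a SAN list can be split into IPAddresses and DNSNames on an x509 template; it is self-signed for brevity, whereas minikube signs the server certificate with its CA (ca.pem/ca-key.pem), so treat this as an illustration rather than the project's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the provision.go line above.
	sans := []string{"127.0.0.1", "192.168.72.151", "embed-certs-190398", "localhost", "minikube"}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-190398"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Literal IPs become IP SANs, everything else a DNS SAN.
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed here; minikube signs with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}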
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
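fix.go reads the guest clock over SSH with `date +%s.%N`, compares it with the host clock, and only proceeds when the delta is within a tolerance. Below is a small sketch of that comparison reusing the timestamps from the log above; the 2s tolerance is an assumption for the example, not a value taken from the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest
// and returns how far the guest clock is from the given host time.
// It assumes a 9-digit nanosecond field, as %N produces.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.Split(strings.TrimSpace(guestOut), ".")
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing seconds: %w", err)
	}
	var nsec int64
	if len(parts) > 1 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, fmt.Errorf("parsing nanoseconds: %w", err)
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1723685345, 786613294) // "Remote" timestamp from the log
	delta, err := guestClockDelta("1723685345.869278337", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, not from the log
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // ~82.665043ms
	}
}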
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
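The sed invocations above point CRI-O at the pause image, switch cgroup_manager to "cgroupfs", and adjust conmon_cgroup and default_sysctls in /etc/crio/crio.conf.d/02-crio.conf. As a rough illustration of the cgroup-driver rewrite only, done in memory rather than over SSH as minikube does, a hedged Go sketch:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line of a crio.conf
// drop-in, mirroring the sed expression used in the log above.
func setCgroupManager(conf, driver string) string {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", driver))
}

func main() {
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(setCgroupManager(conf, "cgroupfs"))
}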
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
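crio.go decides whether the preload tarball is needed by listing images with `sudo crictl images --output json` and looking for the expected kube-apiserver tag. A minimal sketch of that check follows; the JSON field names (images, repoTags) reflect crictl's usual output and are assumptions, not copied from the minikube source.

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages mirrors only the parts of `crictl images --output json`
// that matter for the check.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag,
// roughly the test crio.go performs before copying the preload tarball.
func hasImage(out []byte, want string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver preloaded:", ok) // false -> copy the tarball over
}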
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
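The kubeadm.yaml written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The sketch below, which assumes the gopkg.in/yaml.v3 package is available, shows how such a file can be split into its documents and the kubelet's cgroupDriver read back, matching the "cgroupfs" value CRI-O was configured with earlier; the sample is a trimmed stand-in, not the full generated file.

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3" // assumption: module present in go.mod
)

// Trimmed stand-in for the generated kubeadm.yaml.
const sample = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.31.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(sample))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once every document has been read
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
		}
	}
}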
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
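Each "openssl x509 -checkend 86400" run above asks whether a certificate expires within the next 24 hours. Below is a minimal Go sketch of the same check using crypto/x509 instead of the openssl CLI; it is illustrative only, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: report failure if the cert expires within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}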
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
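The five "kubeadm init phase" invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are issued over SSH by ssh_runner during the control-plane restart. A minimal local sketch of the same sequence with exec.Command, assuming the binary and config paths shown in the log (this is not minikube's actual ssh_runner-based code):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubeadm", args...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", p, err, out)
		}
		fmt.Printf("phase %s done\n", p[2])
	}
}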
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
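The healthz polling above first sees 403 (the unauthenticated "system:anonymous" probe) and then 500 (bootstrap post-start hooks still pending) before the endpoint finally returns 200. A minimal sketch of an equivalent polling loop in Go, assuming the same endpoint as the log; TLS verification is skipped here purely to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.151:8443/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 and 500 responses are expected while the control plane finishes
			// starting, as the log entries above show.
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for apiserver healthz")
}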
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
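Once the apiserver answers, the test waits for each system-critical pod to report Ready, which is what the pod_ready.go lines above are doing. A minimal client-go sketch of that wait for a single pod, assuming a kubeconfig path (hypothetical) and reusing the pod name from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test harness builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-190398", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}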
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
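	(Editor's note, not part of the log: the fix.go lines above — "guest clock: 1723685388.495787154 ... delta=98.392587ms ... within tolerance" — record minikube comparing the guest's `date +%s.%N` output against the host clock. The following is a minimal, self-contained Go sketch of that kind of check, not minikube's actual fix.go; the one-second tolerance and the hard-coded sample value are assumptions for illustration only.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns a `date +%s.%N` string such as
// "1723685388.495787154" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly 9 digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // hypothetical tolerance for this sketch

	guest, err := parseGuestClock("1723685388.495787154") // sample value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()

	// Positive delta means the host clock is ahead of the guest clock.
	delta := host.Sub(guest)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock adjustment would be needed\n", delta)
	}
}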
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
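	(Editor's note, not part of the log: the cache_images lines above transfer cached image tarballs and load them with "sudo podman load -i /var/lib/minikube/images/<image>" over SSH. Below is a minimal local Go sketch of that load step using os/exec only; it is not minikube's ssh_runner, and the tarball path is merely the example value taken from the log.)

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage runs the same kind of `podman load` command the log records,
// but locally instead of over SSH.
func loadCachedImage(tarball string) error {
	// Equivalent of the logged "sudo podman load -i <tarball>" invocation.
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s failed: %v\n%s", tarball, err, out)
	}
	fmt.Printf("loaded %s:\n%s", tarball, out)
	return nil
}

func main() {
	// Example path from the log above; adjust for a real environment.
	if err := loadCachedImage("/var/lib/minikube/images/kube-apiserver_v1.31.0"); err != nil {
		fmt.Println(err)
	}
}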
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
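
The repeated openssl x509 -noout -checkend 86400 runs above simply ask whether each control-plane certificate will still be valid 24 hours from now; minikube reuses the existing certs only if all of these checks pass. A minimal Go sketch of the same check is shown below. The file path and helper name are illustrative only, not minikube's actual code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath will have expired
// "window" from now, mirroring openssl's -checkend semantics.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; 86400 seconds == 24 hours.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
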
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
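
The grep/rm sequence above is the stale-config check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so the following kubeadm init phases can regenerate it. A rough Go sketch of that decision follows, purely for illustration and not minikube's actual implementation; the endpoint and file names are copied from the log.

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Endpoint and file names taken from the log above.
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing at a different endpoint: treat as stale and remove it.
			if rmErr := os.Remove(path); rmErr == nil {
				fmt.Println("removed stale config:", path)
			}
		}
	}
}
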
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
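[editor's note] The repeated 500 responses above are the apiserver reporting that a few poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) had not finished; the poll finally gets a 200 "ok" at 01:30:12.902 after roughly 4.5s. As a rough illustration only (this is not minikube's api_server.go; the URL, interval, and TLS handling are assumptions), a health-wait loop of this shape could look like:

// Illustrative sketch: poll an apiserver /healthz endpoint until it reports
// "ok", tolerating the 500s emitted while poststarthooks are still completing.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal cert during bootstrap; a real
		// client would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.166:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}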
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
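[editor's note] The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation only, a typical bridge CNI conflist (all field values here are assumptions, not the actual payload) can be written from Go like this:

// Illustrative sketch: write a *typical* bridge CNI configuration to disk.
package main

import (
	"fmt"
	"os"
)

// bridgeConflist is an assumed example; minikube's actual 1-k8s.conflist
// contents are not visible in this log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing to /tmp rather than /etc/cni/net.d, which requires root.
	path := "/tmp/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", len(bridgeConflist), "bytes to", path)
}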
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
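[editor's note] The pod_ready.go lines above poll each system-critical pod until its Ready condition becomes True (up to 4m0s each). A minimal client-go sketch of that pattern, with an assumed kubeconfig path and polling interval (this is not minikube's pod_ready.go):

// Illustrative sketch: wait for a pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// "/path/to/kubeconfig" is a placeholder, not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-qnnqs", 4*time.Minute))
}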
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
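[editor's note] The cycle above repeats for process 66919 (the old-k8s-version v1.20.0 cluster): each control-plane container name is looked up with "sudo crictl ps -a --quiet --name=...", every lookup returns an empty ID list, and "kubectl describe nodes" fails because nothing is listening on localhost:8443. A hedged sketch of that crictl lookup (helper name and loop are illustrative, not minikube's cri.go):

// Illustrative sketch: list container IDs for a given name via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		// An empty list corresponds to the `found id: ""` / "0 containers"
		// lines in the log: the control-plane containers never started.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}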
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
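
Each "failed describe nodes" block above is the bundled kubectl (v1.20.0, per the binary path in the command) being refused on localhost:8443, i.e. nothing is listening on the API server port yet, which is consistent with crictl reporting no kube-apiserver container. The same probe can be repeated by hand with the exact command from the log:

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # exits with status 1 and "The connection to the server localhost:8443 was refused"
    # for as long as the kube-apiserver container has not come up
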
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
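
The interleaved pod_ready lines come from the other test processes in this run (PIDs 66492, 67000 and 67451), each polling its own metrics-server pod and repeatedly seeing Ready=False. A rough equivalent check by hand; the pod names are taken from the log, while the k8s-app=metrics-server label selector is an assumption about how the addon labels its pods:

    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
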
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
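
With no control-plane containers to read logs from, each cycle falls back to host-level sources gathered over SSH; the commands are visible verbatim in the Run: lines above. Executed directly on the node they are, roughly:

    sudo journalctl -u kubelet -n 400                                          # kubelet service log
    sudo journalctl -u crio -n 400                                             # CRI-O service log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo crictl ps -a || sudo docker ps -a                                     # container status, docker as fallback
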
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
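	Note: the block above is minikube (PID 66919) probing for each expected control-plane container via crictl, finding none, and falling back to gathering node diagnostics. The commands below are copied from those log lines so the same probes can be reproduced by hand inside the node (for example over "minikube ssh"); the loop wrapper is illustrative only and is not part of minikube itself.

	    # Probe for each expected control-plane/CNI container; empty output means
	    # no container (running or exited) matches that name.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done

	    # Diagnostics gathered once nothing is found (same commands as in the log):
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a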
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
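	Note: the check-and-remove sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and here every grep fails simply because the files no longer exist after the reset. A compact hand-run equivalent is sketched below; the loop and the -q flag are illustrative, not minikube's own code.

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}"; then
	        sudo rm -f "/etc/kubernetes/${f}"   # grep exit status 2 above just means the file is missing
	      fi
	    done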
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
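	Note: with no control-plane containers recoverable, minikube gives up on restartPrimaryControlPlane and re-bootstraps the cluster: kubeadm reset, swap in the freshly rendered kubeadm.yaml, then kubeadm init with preflight checks relaxed. The sketch below strings together the exact commands from the log (PID 66919, Kubernetes v1.20.0); the BIN variable is just shorthand, and the --ignore-preflight-errors list is abbreviated here - the full list appears in the init invocation above.

	    BIN=/var/lib/minikube/binaries/v1.20.0
	    sudo env PATH="${BIN}:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="${BIN}:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem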
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
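	Note: the 67000 run above hits the 4m0s WaitExtra deadline on metrics-server-6867b74b74-sfnng and likewise falls back to a cluster reset, this time with the v1.31.0 binaries. The readiness condition that the wait loop keeps polling can be inspected directly; only the namespace and pod name below come from the log, and the jsonpath expression is an assumption about how to read the pod's Ready condition, not something the test runs.

	    kubectl -n kube-system get pod metrics-server-6867b74b74-sfnng \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	    kubectl -n kube-system describe pod metrics-server-6867b74b74-sfnng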
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
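The block above is minikube's per-component log collection: it resolves each container ID with crictl, then tails that container's log. A rough manual equivalent of one such step, using only the crictl flags already shown in the log (illustrative, not part of the test output):

    # find the kube-apiserver container and tail its last 400 log lines
    sudo crictl ps -a --quiet --name=kube-apiserver \
      | xargs -r -n1 sudo crictl logs --tail 400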
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
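The grep/rm sequence above is the stale-kubeconfig cleanup that precedes the re-init: each kubeconfig under /etc/kubernetes is kept only if it still references the expected control-plane endpoint. A condensed sketch of that pattern, assuming the same endpoint and file names shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop the file if it is missing or no longer points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done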
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
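Once the apiserver process is up, minikube polls its healthz endpoint until it returns 200, as seen above. The same check can be made by hand against the address shown in the log, assuming anonymous access to /healthz is enabled (the Kubernetes default); -k skips certificate verification, which the harness instead handles with its own CA:

    curl -k https://192.168.39.223:8444/healthz
    # prints "ok" when the apiserver is healthy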
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
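At this point the default-k8s-diff-port-018537 profile is reported ready, but metrics-server-6867b74b74-gdpxh is still Pending in the pod list above. An illustrative follow-up (not part of the test run) to inspect why, using the kubeconfig context the "Done!" message refers to:

    kubectl --context default-k8s-diff-port-018537 -n kube-system \
      describe pod metrics-server-6867b74b74-gdpxh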
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
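	The two kubectl invocations above are how the addon manifests get materialized: the YAML is first scp'd under /etc/kubernetes/addons, then applied with the bundled kubectl against the in-VM kubeconfig. A minimal sketch of that apply step in Go (paths taken from the log; error handling simplified, not minikube's actual helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddon runs the bundled kubectl against the in-VM kubeconfig,
	// mirroring the "sudo KUBECONFIG=... kubectl apply -f ..." lines above.
	func applyAddon(manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := applyAddon(
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}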
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
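	The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, retried until it answers 200 with body "ok". A stripped-down sketch of such a poll (illustration only: TLS verification is skipped here for brevity, whereas the real check authenticates with the cluster's certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// waitHealthz polls https://<host>/healthz until it returns 200 "ok" or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Illustration only: skip cert verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.72.151:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}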
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
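	The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the following kubeadm init can rewrite it. A small stand-in for those shell commands in Go (same file list and endpoint as in the log, simplified):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: drop it so kubeadm regenerates it.
				os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
				continue
			}
			fmt.Printf("keeping %s\n", f)
		}
	}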
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
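	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. Its exact contents are not echoed in the log; the snippet below writes a representative bridge-plus-portmap conflist (subnet and plugin options are illustrative assumptions, not the literal file minikube generated):

	package main

	import (
		"log"
		"os"
	)

	// An illustrative bridge CNI config; field values are assumptions, not the
	// exact 496-byte file copied to the node in the log above.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}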
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
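	The repeated "kubectl get sa default" runs above are a readiness gate: kube-system privileges can only be granted once the "default" ServiceAccount exists, so the step polls for it before proceeding. A compact sketch of that retry loop (binary and kubeconfig paths from the log; polling interval and timeout are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls the bundled kubectl until "get sa default" succeeds,
	// mirroring the repeated invocations in the log above.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(5 * time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}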
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
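
	The failure message above points at the kubelet and at the static-pod containers. For reference, those checks can be run directly on the node; a minimal sketch, assuming shell access to the VM (for example via 'minikube ssh -p <profile>', profile name elided here), with the health endpoint and the CRI-O socket path taken verbatim from the log:

	    # Is the kubelet service running, and what does it log? (systemd)
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100

	    # Probe the kubelet health endpoint that kubeadm was polling
	    curl -sSL http://localhost:10248/healthz

	    # List the control-plane containers under CRI-O and inspect a failing one
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>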
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
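
	Once a profile prints "Done!", the readiness waits recorded above (node Ready, system pods Ready, apiserver /healthz) can be spot-checked from the host against the context minikube just configured. A rough sketch, using the context name from the line above (kubectl 1.31.0 per the log):

	    # Node and system-pod state for the freshly started profile
	    kubectl --context no-preload-884893 get nodes
	    kubectl --context no-preload-884893 -n kube-system get pods

	    # Same healthz probe the test performed against the apiserver
	    kubectl --context no-preload-884893 get --raw /healthz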
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you can list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
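	As a follow-up to the suggestion above, a sketch of the retried start with the proposed kubelet cgroup-driver override; the profile name is a placeholder and the driver/runtime flags are assumptions based on this job's KVM/crio configuration, not copied from the failing command:
	
		minikube start -p <profile> \
		  --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd   # align the kubelet's cgroup driver with systemd, per the suggestion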
	
	
	==> CRI-O <==
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.482041567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1601e884-894c-45f5-9e4b-ad667084206c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.482261571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1601e884-894c-45f5-9e4b-ad667084206c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.517548820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed45bdd4-8326-4d7c-a3ad-5c11877ff70a name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.517640871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed45bdd4-8326-4d7c-a3ad-5c11877ff70a name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.519158427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8ce9e76-5fc5-4eb6-b24e-c3ace084c9a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.519674339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686186519649056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8ce9e76-5fc5-4eb6-b24e-c3ace084c9a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.520265512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd45e60d-747a-4709-beeb-b67c61b4290d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.520320036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd45e60d-747a-4709-beeb-b67c61b4290d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.520872437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd45e60d-747a-4709-beeb-b67c61b4290d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.554105821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92379bed-a6b4-41c9-9ead-54b766f9ae31 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.554189240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92379bed-a6b4-41c9-9ead-54b766f9ae31 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.554886540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d6305bb-7ee7-4133-8e40-d284270e5220 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.555371850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686186555343956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d6305bb-7ee7-4133-8e40-d284270e5220 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.555834960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7ce2981-3eea-4d9a-96ac-806ba97cea73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.555899397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7ce2981-3eea-4d9a-96ac-806ba97cea73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.556207289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7ce2981-3eea-4d9a-96ac-806ba97cea73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.587552792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3d7558f-b9c6-43f6-9c28-45927ed73d71 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.587643379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3d7558f-b9c6-43f6-9c28-45927ed73d71 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.588702977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ef540e6-c8ce-41b1-86f1-df81b0840d4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.589359088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686186589329545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ef540e6-c8ce-41b1-86f1-df81b0840d4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.589928761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=018ecb9e-84c0-44ff-8fed-08e9fcc366cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.590013312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=018ecb9e-84c0-44ff-8fed-08e9fcc366cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.590299656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=018ecb9e-84c0-44ff-8fed-08e9fcc366cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.593793955Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=64857e89-3b0d-4fe6-a943-66b02fda1aec name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:06 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:43:06.594003803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64857e89-3b0d-4fe6-a943-66b02fda1aec name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7e16ea21684b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   d8dc76e0e139c       storage-provisioner
	91277761e8354       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   eb530c4afe1db       busybox
	6878af069904e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   76dceb9cb96dd       coredns-6f6b679f8f-gxdqt
	451245c6ce878       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   e9cf9f72683fd       kube-proxy-s8mfb
	51d71abfa8f5c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   d8dc76e0e139c       storage-provisioner
	9aa794b86b772       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   24db94d899f54       kube-apiserver-default-k8s-diff-port-018537
	2f9821e596c0d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   ab70c54bebffc       kube-controller-manager-default-k8s-diff-port-018537
	a093f3ec7d6d1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   4c7ee67c2d223       kube-scheduler-default-k8s-diff-port-018537
	e0cc07c948ffd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   c255231cfd077       etcd-default-k8s-diff-port-018537
	
	
	==> coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45202 - 35974 "HINFO IN 4574042729287797711.619855990244093827. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010305813s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=default-k8s-diff-port-018537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_22_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:22:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018537
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:43:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:40:23 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:40:23 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:40:23 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:40:23 +0000   Thu, 15 Aug 2024 01:29:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    default-k8s-diff-port-018537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c194d510de044c42ad01b684edef68d1
	  System UUID:                c194d510-de04-4c42-ad01-b684edef68d1
	  Boot ID:                    49eb4833-ca02-4ac6-b00c-8451d140ab04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-6f6b679f8f-gxdqt                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-018537                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-018537              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018537     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-s8mfb                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-018537              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-6867b74b74-gdpxh                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-018537 event: Registered Node default-k8s-diff-port-018537 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-018537 event: Registered Node default-k8s-diff-port-018537 in Controller
	
	
	==> dmesg <==
	[Aug15 01:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051549] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.804256] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.886090] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.514859] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.754115] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058239] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056208] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.190920] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.132338] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.293068] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.064359] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +1.738729] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.067767] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.527354] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.462666] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +3.213250] kauditd_printk_skb: 64 callbacks suppressed
	[Aug15 01:30] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] <==
	{"level":"info","ts":"2024-08-15T01:29:37.065721Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2024-08-15T01:29:37.072038Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:29:37.076388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4eb1782ea0e4b224","local-member-id":"dce4f6de3abdb6bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:29:37.078033Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:29:37.078093Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.223:2380"}
	{"level":"info","ts":"2024-08-15T01:29:38.360397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T01:29:38.360532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:29:38.360581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgPreVoteResp from dce4f6de3abdb6bd at term 2"}
	{"level":"info","ts":"2024-08-15T01:29:38.360610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T01:29:38.360647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd received MsgVoteResp from dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2024-08-15T01:29:38.360674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dce4f6de3abdb6bd became leader at term 3"}
	{"level":"info","ts":"2024-08-15T01:29:38.360699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dce4f6de3abdb6bd elected leader dce4f6de3abdb6bd at term 3"}
	{"level":"info","ts":"2024-08-15T01:29:38.363937Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dce4f6de3abdb6bd","local-member-attributes":"{Name:default-k8s-diff-port-018537 ClientURLs:[https://192.168.39.223:2379]}","request-path":"/0/members/dce4f6de3abdb6bd/attributes","cluster-id":"4eb1782ea0e4b224","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:29:38.364306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:29:38.364468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:29:38.364499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:29:38.364574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:29:38.365502Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:29:38.365529Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:29:38.366331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.223:2379"}
	{"level":"info","ts":"2024-08-15T01:29:38.367088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:30:00.109401Z","caller":"traceutil/trace.go:171","msg":"trace[94219129] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"147.273853ms","start":"2024-08-15T01:29:59.962108Z","end":"2024-08-15T01:30:00.109382Z","steps":["trace[94219129] 'process raft request'  (duration: 147.135371ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:39:38.396813Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":898}
	{"level":"info","ts":"2024-08-15T01:39:38.406875Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":898,"took":"9.747395ms","hash":3975383058,"current-db-size-bytes":2867200,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2867200,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-08-15T01:39:38.406928Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3975383058,"revision":898,"compact-revision":-1}
	
	
	==> kernel <==
	 01:43:06 up 13 min,  0 users,  load average: 0.01, 0.06, 0.07
	Linux default-k8s-diff-port-018537 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] <==
	W0815 01:39:40.736414       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:39:40.736512       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:39:40.737619       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:39:40.737672       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:40:40.738479       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:40:40.738583       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:40:40.738726       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:40:40.738768       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:40:40.739731       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:40:40.739798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:42:40.739876       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:40.740022       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:42:40.740292       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:40.740463       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:42:40.741710       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:42:40.741721       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] <==
	E0815 01:37:43.305165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:37:43.700322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:38:13.310795       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:38:13.708277       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:38:43.317703       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:38:43.716425       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:39:13.330770       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:13.723360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:39:43.338782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:43.732948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:40:13.344520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:13.741259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:40:23.649286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-018537"
	E0815 01:40:43.350534       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:43.748928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:40:49.386434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="2.50974ms"
	I0815 01:41:01.380355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="145.926µs"
	E0815 01:41:13.358386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:13.756642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:41:43.365674       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:43.764326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:13.372664       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:13.772396       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:43.379598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:43.780706       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:29:41.113509       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:29:41.123156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.223"]
	E0815 01:29:41.123327       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:29:41.153899       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:29:41.154029       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:29:41.154113       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:29:41.156545       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:29:41.156843       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:29:41.156889       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:29:41.158340       1 config.go:197] "Starting service config controller"
	I0815 01:29:41.158407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:29:41.158449       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:29:41.158465       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:29:41.160665       1 config.go:326] "Starting node config controller"
	I0815 01:29:41.160689       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:29:41.259499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:29:41.259519       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:29:41.261031       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] <==
	I0815 01:29:37.246079       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:29:39.700623       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:29:39.700756       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:29:39.700827       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:29:39.700871       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:29:39.732061       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:29:39.732204       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:29:39.734470       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 01:29:39.734642       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:29:39.734680       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:29:39.734710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:29:39.835018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:41:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:41:54.568606     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686114568160445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:04.571044     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686124570582112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:04.571372     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686124570582112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:08 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:08.367714     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:42:14 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:14.572816     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686134572473369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:14 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:14.572915     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686134572473369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:20 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:20.367557     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:42:24 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:24.575470     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686144574964140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:24 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:24.575512     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686144574964140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:34.367326     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:34.397342     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:34.578092     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686154577695101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:34.578134     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686154577695101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:44 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:44.580214     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686164579820565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:44 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:44.580269     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686164579820565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:49 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:49.365754     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:42:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:54.582090     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686174581743164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:42:54.582132     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686174581743164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:00 default-k8s-diff-port-018537 kubelet[937]: E0815 01:43:00.366571     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:43:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:43:04.584171     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686184583729857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:43:04.584226     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686184583729857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] <==
	I0815 01:29:40.970652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 01:30:10.977180       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] <==
	I0815 01:30:11.678456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:30:11.687519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:30:11.687615       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:30:29.085820       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:30:29.086063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89!
	I0815 01:30:29.086626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c2ace39-2e0f-490f-b0d0-c568fba5964f", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89 became leader
	I0815 01:30:29.187622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gdpxh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh: exit status 1 (63.890821ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gdpxh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0815 01:34:41.523646   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190398 -n embed-certs-190398
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:43:18.120520429 +0000 UTC m=+5872.820752032
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-190398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-190398 logs -n 25: (2.077123591s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
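	The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and reinsert conmon_cgroup, all in /etc/crio/crio.conf.d/02-crio.conf. An illustrative excerpt of the drop-in after those edits (only the keys touched here; the section names are the usual CRI-O ones and are an assumption, not copied from this run):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"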
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
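	The sysctl probe fails because br_netfilter is not loaded yet, so the runner loads the module and enables IPv4 forwarding directly. A rough manual equivalent (the explicit bridge-nf-call-iptables write is added here for completeness and is not part of the run above):

	    # load the bridge-netfilter module, then set the sysctls kube-proxy relies on
	    sudo modprobe br_netfilter
	    echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward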
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
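	With no preloaded images present in the runtime, the tarball of cached images is copied into the guest and unpacked under /var. A rough equivalent of that copy-and-extract step, assuming the same guest address and tarball name used above:

	    # push the lz4 preload into the guest, then unpack it and clean up
	    scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.50.21:/preloaded.tar.lz4
	    ssh docker@192.168.50.21 \
	      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'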
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
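	The warning above means the per-image files expected under the local cache directory were not there, so none of the cached copies could be loaded into CRI-O; the start continues and the images are pulled later as needed. A quick way to see what the cache actually holds (path taken from the log; contents vary per run):

	    ls -l /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/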
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
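	The kubelet fragments above are rendered in memory and copied to the guest a few lines further down (the 429-byte scp to 10-kubeadm.conf). Assembled, the drop-in would land roughly as follows; the destination path is the one shown in that scp step:

	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed from the log)
	    [Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21

	    [Install]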
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
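	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before being moved into place. If one wanted to validate it without touching the node, kubeadm's dry-run mode is a reasonable sketch (the binary path matches the one used elsewhere in this log):

	    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run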
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
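	Each CA above is made trusted by symlinking it into /etc/ssl/certs under its OpenSSL subject-hash name, which is exactly what the test -L / ln -fs pairs are doing. The same pattern, written out:

	    # link a CA into the system trust store under its subject-hash name
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA, per the log
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"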
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
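	The -checkend 86400 invocations return a non-zero status when a certificate will expire within the next 24 hours, which is what the restart path uses to decide whether certs need regenerating. A compact sketch of the same check over a few of the client certs:

	    # a non-zero exit from -checkend means the cert expires inside the window
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	        || echo "${c}.crt expires within 24h"
	    done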
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
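	The repeated pgrep lines are a poll, at roughly half-second intervals, for a kube-apiserver process belonging to this profile. A minimal sketch of that wait loop, assuming the same pattern and an arbitrary ~60s budget:

	    # wait up to ~60s for the apiserver process to appear
	    for i in $(seq 1 120); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      sleep 0.5
	    done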
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
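	The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and key-only auth, and treats a successful "exit 0" as proof the guest is reachable. A minimal Go sketch of such a probe (hypothetical, not minikube's actual implementation; paths and values are taken from the log) might look like:

	// Hypothetical sketch: probe a VM over SSH with an external client,
	// mirroring the option set shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probeSSH(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0", // same no-op command the log runs to confirm SSH is up
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
		}
		return nil
	}

	func main() {
		// Example values from the log; adjust for a real machine.
		if err := probeSSH("192.168.72.151", "/path/to/id_rsa"); err != nil {
			fmt.Println(err)
		}
	}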
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
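	The server certificate generated above embeds the SANs listed in the log (127.0.0.1, the VM IP, the machine name, localhost, minikube) and is signed by the profile's CA. A self-contained sketch of issuing a cert with that SAN set, using a throwaway in-memory CA rather than minikube's real ca.pem/ca-key.pem (error handling elided for brevity):

	// Minimal sketch, not minikube's code: sign a server cert whose SANs match
	// the set shown in the log, using an in-memory CA created on the spot.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "example CA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-190398"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.151")},
			DNSNames:    []string{"embed-certs-190398", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		fmt.Printf("server.pem (%d bytes)\n", len(pemBytes))
	}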
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
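	The fix.go lines above read the guest's clock over SSH with "date +%s.%N", compare it to a host-side timestamp, and only treat the machine as needing a resync if the delta leaves a tolerance window. A small sketch of that comparison, using the two timestamps from the log and an assumed 1s tolerance (the real threshold is not shown in the log):

	// Rough sketch of the comparison, not minikube's source.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "seconds.nanoseconds" output of `date +%s.%N`
	// into a time.Time. Assumes a 9-digit nanosecond field, as in the log.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1723685345.869278337") // guest-side value from the log
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 8, 15, 1, 29, 5, 786613294, time.UTC) // host-side reference from the log
		delta := guest.Sub(host)
		tolerance := 1 * time.Second // assumed tolerance for illustration only
		within := delta < tolerance && delta > -tolerance
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, within) // prints ~82.665043ms
	}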
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
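	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image to registry.k8s.io/pause:3.10 and force the cgroupfs cgroup manager, then reloads systemd and restarts crio. The same two rewrites expressed as plain Go string manipulation (a sketch only, not the actual ssh_runner calls):

	// Sketch: regexp-based equivalents of the sed edits shown in the log.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n" // toy input for illustration
		fmt.Print(rewriteCrioConf(in))
	}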
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
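	Before declaring the runtime ready, the log waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for the version. A minimal sketch of that socket wait (the 500ms poll interval is an assumption):

	// Simple sketch, not the real implementation: poll for the CRI-O socket
	// path until it exists or the deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists; crictl can talk to the runtime now
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}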
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
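	The retry.go lines above poll for the restarted VM's DHCP lease with delays that grow between attempts (254ms, 285ms, 336ms, 591ms, ...). A hypothetical sketch of that pattern; the helper below is illustrative and is not minikube's retry package:

	// Illustrative retry-with-growing-backoff loop, jittered like the logged delays.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the wait, similar to the increasing delays logged
		}
		return errors.New("machine never reported an IP address")
	}

	func main() {
		_ = retryWithBackoff(10, 250*time.Millisecond, func() error {
			// In the real flow this would query libvirt for the domain's DHCP lease.
			return errors.New("no lease yet")
		})
	}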
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
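	The repeated pgrep invocations above are a readiness poll: roughly every 500ms the runner checks whether a kube-apiserver process started for this profile exists yet. A sketch of the same loop (the 2-minute deadline here is an assumption, not taken from the log):

	// Hedged sketch of the polling shown above: pgrep exits 0 once a matching
	// process exists, so keep retrying until it does or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}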
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
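	The preload step above copies the lz4-compressed image tarball to the guest, unpacks it into /var with extended attributes preserved, and then deletes it. A simplified local sketch of the unpack-and-clean-up portion (in the real flow these commands run on the guest over SSH):

	// Sketch: run the same tar invocation the log shows, then remove the tarball.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func extractPreload(tarball, dest string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extract %s: %w", tarball, err)
		}
		// The log removes /preloaded.tar.lz4 once it has been unpacked.
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}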
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
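Each control-plane certificate is checked with `openssl x509 -checkend 86400`, i.e. "will this certificate still be valid 24 hours from now". The same check expressed with crypto/x509, using one of the paths from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certStillValid reports whether the PEM certificate at path remains valid
// 24h from now, mirroring `openssl x509 -checkend 86400`.
func certStillValid(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certStillValid("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}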
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
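Instead of a full `kubeadm init`, the restart path re-runs the individual init phases shown above. A sketch of the same sequence via os/exec, with the kubeadm binary and config paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The init phases run for a control-plane restart, in log order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"/var/lib/minikube/binaries/v1.31.0/kubeadm"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s", p, err, out)
	}
}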
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
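The healthz probe above tolerates 403 (anonymous access is rejected until the RBAC bootstrap roles exist) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) and only stops once the endpoint returns 200. A minimal polling loop along those lines; the URL is the one from the log, and TLS verification is skipped because the probe runs before the cluster CA is trusted locally:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it returns 200 or
// the deadline passes; non-200 answers count as "not ready yet".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.151:8443/healthz", 4*time.Minute))
}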
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
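The pod_ready loop above polls each system-critical pod until its Ready condition turns True or the 4m0s budget runs out. A client-go sketch of that wait for a single pod; the kubeconfig path is illustrative, while the namespace and pod name come from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-embed-certs-190398", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}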
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
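	The three fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the controller's own clock and accept the host when the difference stays inside a tolerance. A minimal Go sketch of that comparison follows; the parsing mirrors the epoch-seconds.nanoseconds format in the log, while the 2-second tolerance and the use of time.Now() as the reference are illustrative assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (epoch seconds plus a
// nine-digit nanosecond field) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log line above.
	guest, err := parseGuestClock("1723685367.180566854")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now()) // reference clock; minikube uses its host-side timestamp here
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	}
}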
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
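	The sequence just above is minikube probing for bridge netfilter support: the sysctl read exits with status 255 because br_netfilter is not loaded yet, so it loads the module and then enables IPv4 forwarding. A rough Go sketch of the same probe-then-fix flow follows; it shells out to the same sysctl/modprobe/sh commands, but runs them on the local machine rather than over SSH as the log does, and assumes passwordless sudo.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; if it is missing, the br_netfilter
	// module is probably not loaded (the status-255 case in the log above).
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
			return
		}
	}
	// Enable IPv4 forwarding the same way the log does.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}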
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
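	The bash pipeline in the previous line drops any stale `host.minikube.internal` entry from /etc/hosts and appends a fresh one pointing at the gateway IP. A small Go equivalent is sketched below; the `hosts.sample` path is a stand-in so the sketch does not touch the real /etc/hosts, which on the guest also needs root as in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps ip to host: existing mappings for host are filtered out and a fresh
// "ip<TAB>host" line is appended, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.sample" is a placeholder; the real target in the log is /etc/hosts.
	if err := ensureHostsEntry("hosts.sample", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}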
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
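	The retry.go lines above show the driver polling libvirt for the new VM's DHCP lease, waiting a little longer after each miss (306 ms, 368 ms, 415 ms, 575 ms, ...). The sketch below reproduces that grow-and-retry shape in plain Go; the lookupIP stub, the starting delay, the 1.5x growth factor and the attempt cap are invented for illustration rather than taken from minikube.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP stands in for querying the libvirt DHCP leases; it simply starts
// succeeding after a few attempts so the retry loop has something to do.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.0.2.10", nil // documentation address, not a value from the log
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			fmt.Println("machine came up at", ip)
			return
		}
		// Grow the wait each round and add a little jitter, similar in spirit
		// to the increasing delays logged by retry.go above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for an IP")
}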
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
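	Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a control-plane certificate is still valid for at least another 24 hours before the existing cluster is reused. The same check can be written with Go's crypto/x509, as sketched below; the certificate path in main is illustrative, since the files inspected in the log live under /var/lib/minikube/certs on the guest.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in the PEM file at path
// is still valid at now+window, which is what `openssl x509 -checkend` checks.
func certValidFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	// Illustrative path; substitute any certificate you want to inspect.
	ok, err := certValidFor("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least another 24h:", ok)
}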
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
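
	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a 500ms poll for the apiserver process; a rough bash equivalent of that loop (the interval is inferred from the timestamps):

	  # wait until a kube-apiserver process whose full command line mentions minikube exists
	  $ until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
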
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
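
	The "Ready":"False" polls for metrics-server-6867b74b74-sfnng come from a separate minikube run interleaved in this log (process 67000). To see why such a pod is not becoming Ready, one would typically inspect its conditions and events, for example:

	  $ kubectl -n kube-system describe pod metrics-server-6867b74b74-sfnng | grep -A6 'Conditions:'
	  $ kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-sfnng
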
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
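
	The 403 responses above are expected while the rbac/bootstrap-roles post-start hook is still pending (the anonymous user cannot read /healthz until the bootstrap roles and bindings exist), the 500s list the hooks still failing, and the final 200 marks the apiserver healthy. Once healthy, the same endpoint can be probed by hand; a verbose check against the address from the log:

	  $ curl -sk 'https://192.168.39.223:8444/healthz?verbose'
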
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
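
	The 496-byte file written here is minikube's bridge CNI configuration. Its exact contents are not captured in the log; a representative bridge + portmap conflist of the kind used for this option looks roughly like the following (all values illustrative, not taken from this run):

	  $ cat /etc/cni/net.d/1-k8s.conflist
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	        "ipMasq": true, "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
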
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
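
	The pod list and node capacity/pressure figures above can be reproduced from the test's kubeconfig; a manual equivalent, using the profile name as the kubectl context (minikube names the context after the profile):

	  $ kubectl --context default-k8s-diff-port-018537 -n kube-system get pods
	  $ kubectl --context default-k8s-diff-port-018537 describe node | grep -A10 'Conditions:'
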
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
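
	Every system-critical pod is skipped above because the node itself still reports Ready=False right after the restart; the per-pod wait is roughly what the following kubectl polls would do once the node comes back (label selectors taken from the list in the log):

	  $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	  $ kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m
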
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
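
	An oom_adj of -16 tells the kernel to strongly avoid OOM-killing the apiserver. The value read above, and its modern oom_score_adj counterpart, can be checked directly on the node:

	  $ cat /proc/$(pgrep -o kube-apiserver)/oom_adj
	  $ cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj
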
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
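
	With metrics-server, storage-provisioner and the default storage class applied, the metrics pipeline can be verified once the metrics-server pod (still Pending above) becomes Ready, for example:

	  $ kubectl --context default-k8s-diff-port-018537 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-018537 top nodes
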
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
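
	The earlier "waiting for machine to come up" retries were libmachine polling libvirt until the new VM obtained this DHCP lease. The same lease information can be inspected by hand with virsh (network and domain names from the log):

	  $ virsh net-dhcp-leases mk-no-preload-884893
	  $ virsh domifaddr no-preload-884893
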
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
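
	The SSH availability probe above just runs "exit 0" over an external ssh invocation; a hand-run equivalent with the same key and the non-interactive options listed in the log:

	  $ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa \
	        docker@192.168.61.166 'exit 0' && echo ssh-ready
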
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
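The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~98ms skew. A minimal Go sketch of that comparison (the tolerance value below is assumed, not minikube's constant; parsing assumes a 9-digit nanosecond fraction as in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1723685388.495787154" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration

	guest, err := parseGuestClock("1723685388.495787154") // value taken from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}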
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
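The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image, switch the cgroup manager, and reset conmon_cgroup. A small Go sketch of equivalent in-memory edits (the sample config content is assumed; this is illustrative, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Equivalent of deleting the conmon_cgroup line and re-adding
	// conmon_cgroup = "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}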
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
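Process 66919 above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms while waiting for the apiserver to come up. A rough local sketch of that loop (the timeout is assumed; minikube actually runs this over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process; pgrep
// exits 0 only when there is a match.
func apiserverRunning() bool {
	err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}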
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
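The pod_ready.go lines above wait, per system pod, for the Ready condition to turn True. A minimal client-go sketch of that wait (kubeconfig path and the poll interval are assumptions; the pod name is taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-018537", 6*time.Minute); err != nil {
		fmt.Println("pod did not become Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}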
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
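The cache_images.go sequence above inspects each image in the runtime, removes tags that do not match the expected ID, and loads the cached tarball with podman. A condensed Go sketch of that decision for one image (image name, expected hash, and tarball path are copied from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/kube-scheduler:v1.31.0"
	want := "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94"
	tarball := "/var/lib/minikube/images/kube-scheduler_v1.31.0"

	// Mirrors: sudo podman image inspect --format {{.Id}} <image>
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == want {
		fmt.Println("image already present, nothing to load")
		return
	}

	// Image missing or at the wrong hash: drop the stale tag and load the
	// cached tarball, mirroring the crictl rmi / podman load pair in the log.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		fmt.Println("podman load failed:", err)
		return
	}
	fmt.Println("transferred and loaded", image, "from cache")
}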
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
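The interleaved pod_ready lines come from three other test processes (PIDs 66492, 67000, and 67451), each waiting for its cluster's metrics-server pod to report a Ready condition of True; "Ready":"False" here means the wait is still in progress, not that the check errored. A hedged equivalent of that check with plain kubectl, using one of the pod names from the log (the jsonpath expression simply extracts the pod's Ready condition):

    kubectl -n kube-system get pod metrics-server-6867b74b74-qnnqs \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

Run against the right kubeconfig and context, this prints False until the pod becomes Ready, mirroring what pod_ready.go keeps polling below.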
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
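Every "describe nodes" attempt in these cycles fails identically: kubectl on the guest cannot reach an apiserver at localhost:8443 because, as the crictl probes show, no kube-apiserver container exists. If one were debugging this by hand over minikube ssh, a couple of stock commands would confirm the diagnosis; this is an illustrative sketch, not part of the test harness, and it assumes ss (iproute2) is present in the guest and that /etc/kubernetes/manifests is the usual kubeadm static-pod directory:

    # Nothing should be listening on the apiserver port while it is down.
    sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
    # Check whether a kube-apiserver static-pod manifest was written at all.
    ls -l /etc/kubernetes/manifests/

With no listener on 8443, the connection-refused errors above are the expected symptom of the missing apiserver rather than a separate failure.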
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
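Each grep/rm pair above performs the same stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already references the control-plane endpoint the new cluster will use. A condensed sketch of that loop, using the exact paths and endpoint shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # remove the file unless it points at the expected control-plane endpoint
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done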
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
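The healthz probe above can be reproduced by hand against the same endpoint; the -k flag skips certificate verification and is used here only to keep the example self-contained:

    curl -sk https://192.168.39.223:8444/healthz
    # prints "ok" once the apiserver is healthy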
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
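With the kubeconfig updated, ordinary kubectl commands now target this cluster, for example:

    kubectl config current-context   # expected: default-k8s-diff-port-018537
    kubectl get pods -n kube-system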
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
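The 496-byte conflist copied above is the bridge CNI configuration that CRI-O will load from /etc/cni/net.d. If needed, it can be inspected on the node once the profile is up, e.g.:

    minikube -p embed-certs-190398 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist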
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
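The elevateKubeSystemPrivileges step above simply re-runs "kubectl get sa default" until the default service account exists (the timestamps show attempts roughly every half second). A minimal sketch of that poll, with the binary and kubeconfig paths taken from the log:

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # retry interval assumed from the log timestamps
    done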
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
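Of the addons in the toEnable map above, only default-storageclass, metrics-server and storage-provisioner are set to true for this profile. The same toggles can also be flipped manually with the minikube CLI, e.g.:

    minikube -p embed-certs-190398 addons enable metrics-server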
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
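	The pid 67000 lines above cover the successful second start of "embed-certs-190398": the kubelet is started, the node reports Ready, the control-plane pods go Ready, the apiserver healthz probe at https://192.168.72.151:8443/healthz returns 200, and the default-storageclass, storage-provisioner and metrics-server addons are applied. Assuming the kubeconfig written by this run is still in place, roughly the same state can be checked by hand with standard kubectl commands (the context name is taken from the log; the APIService name is the one metrics-server normally registers and is an assumption here):

		kubectl --context embed-certs-190398 get nodes
		kubectl --context embed-certs-190398 -n kube-system get pods -o wide
		kubectl --context embed-certs-190398 get apiservice v1beta1.metrics.k8s.io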
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
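	The kubeadm init run on "no-preload-884893" above prints join commands carrying a --discovery-token-ca-cert-hash. That hash is the SHA-256 of the cluster CA public key, so it can be re-derived from the CA certificate under the certificateDir shown in the log (/var/lib/minikube/certs) if a join ever has to be reconstructed by hand; a sketch using the openssl pipeline documented for kubeadm join, adapted to this certificateDir:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'

	The bootstrap token itself (rd520d.rc6325cjita43il4) is short-lived; a fresh one with a ready-made join line can be generated with 'kubeadm token create --print-join-command'.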
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
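	A few lines above, the metrics-server addon is configured with the image fake.domain/registry.k8s.io/echoserver:1.4 (the same override appears in the embed-certs-190398 run). Since fake.domain is not a resolvable registry, that image presumably can never be pulled, which would explain why the metrics-server pods ("metrics-server-6867b74b74-qnnqs" on no-preload-884893, "metrics-server-6867b74b74-4ldv7" on embed-certs-190398) stay ContainersNotReady and the 4m0s WaitExtra earlier in the log times out. A quick manual check of the pull failure (context from the log; the k8s-app=metrics-server selector is an assumption based on the addon's usual labels):

		kubectl --context no-preload-884893 -n kube-system get pods -l k8s-app=metrics-server
		kubectl --context no-preload-884893 -n kube-system describe pod -l k8s-app=metrics-server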
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
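	The kubeadm output above already names the usual triage path: check whether the kubelet service is running, then look for crashed control-plane containers under the runtime. A minimal sketch of that triage on the affected node (for example via `minikube ssh -p <profile>`; the profile name is whatever cluster produced this log), using only the commands the error message itself suggests:

	    # Is the kubelet up, and if not, why did it stop?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100

	    # List control-plane containers under CRI-O, then inspect the failing one
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID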
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
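	Roughly what minikube is doing in the block above before retrying kubeadm init: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file if the endpoint is not found (here the files do not exist at all, so every grep exits with status 2 and the rm is effectively a no-op). A hedged shell equivalent, with the endpoint taken from the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done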
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
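	The healthz probe minikube runs here can be reproduced by hand against the endpoint shown in the log; on a default RBAC setup /healthz is readable anonymously, so this should print `ok` (the -k flag skips TLS verification, an assumption for a quick check from the host):

	    curl -sk https://192.168.61.166:8443/healthz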
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
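	At this point the no-preload-884893 cluster reports healthy and its kubectl context has been written; a quick sanity check from the host (assuming the context name created above) would look something like:

	    kubectl --context no-preload-884893 get nodes
	    kubectl --context no-preload-884893 -n kube-system get pods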
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	
	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
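
A minimal follow-up sketch based on the suggestion in the output above. The profile name is a placeholder (the failing v1.20.0 profile is not named in this excerpt), and whether the cgroup-driver hint actually resolves the kubelet failure on this runner is not verified here:

    # On the affected node: check why the kubelet never answered on :10248
    systemctl status kubelet
    journalctl -xeu kubelet

    # Retry the start with the cgroup driver hint from the warning above
    # (<profile> is a placeholder for the affected minikube profile)
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd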
	
	
	==> CRI-O <==
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.625486159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686199625461660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0f2669b-cdea-4de2-b2d6-cc3506a655ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.625885179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7d95d09-cda5-4c8e-a7e6-bd86ee0eae74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.625938786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7d95d09-cda5-4c8e-a7e6-bd86ee0eae74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.626143015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7d95d09-cda5-4c8e-a7e6-bd86ee0eae74 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.660917048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcd63bda-9953-4a57-8e78-f844a32c3a98 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.661006462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcd63bda-9953-4a57-8e78-f844a32c3a98 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.662429423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43ded004-cc37-4d51-a44f-22c74b90ba7d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.663336343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686199663265148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43ded004-cc37-4d51-a44f-22c74b90ba7d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.664519093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea8cd5e9-f4f7-4c63-b058-91a98c5524d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.664610567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea8cd5e9-f4f7-4c63-b058-91a98c5524d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.664831138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea8cd5e9-f4f7-4c63-b058-91a98c5524d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.706841401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=158b9404-22c5-4068-b76c-d191a3df50bb name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.706957678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=158b9404-22c5-4068-b76c-d191a3df50bb name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.708153085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f94c995-e5bf-4019-9cb4-7c652bcdff1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.708727326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686199708700950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f94c995-e5bf-4019-9cb4-7c652bcdff1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.709838099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26da6c3f-7cec-498d-8ce5-40fd0830eb6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.709917497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26da6c3f-7cec-498d-8ce5-40fd0830eb6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.710226961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26da6c3f-7cec-498d-8ce5-40fd0830eb6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.744856491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc038386-9eba-4f72-ad49-93c6e7020ace name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.744945020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc038386-9eba-4f72-ad49-93c6e7020ace name=/runtime.v1.RuntimeService/Version
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.745955578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6ce3c0a-2367-4c84-97dd-ec6226d5bfe4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.746435124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686199746409781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6ce3c0a-2367-4c84-97dd-ec6226d5bfe4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.746877881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2ac9039-b5c3-40ec-9d29-713ad380d468 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.746934185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2ac9039-b5c3-40ec-9d29-713ad380d468 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:43:19 embed-certs-190398 crio[720]: time="2024-08-15 01:43:19.747145227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2ac9039-b5c3-40ec-9d29-713ad380d468 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e19c80b54c6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a4abbdaa7b4a0       storage-provisioner
	d5fb1c9d0ba32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d7842b9af2fc8       coredns-6f6b679f8f-kmmdc
	f1b2f2efc9842       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ef1cacc079024       coredns-6f6b679f8f-kx2xv
	31fff97fba4f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   7f51d493f9914       kube-proxy-7hfvr
	18cc0a0d4ab8d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   e444cfa8d9689       kube-scheduler-embed-certs-190398
	24478db215409       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   c8654873f01a7       etcd-embed-certs-190398
	fb4506ce76924       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   ef013eee580a2       kube-apiserver-embed-certs-190398
	1aa99cc6c43fc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   2c4b28379543a       kube-controller-manager-embed-certs-190398
	293849baffb77       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   e52b405d97334       kube-apiserver-embed-certs-190398
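
A listing equivalent to the table above can normally be reproduced on the node with crictl, using the runtime endpoint quoted in the kubeadm output earlier in this log (a sketch only; not captured from this run):

    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
    # Once a failing container ID is known, inspect its logs
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID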
	
	
	==> coredns [d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-190398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-190398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=embed-certs-190398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-190398
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:43:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:39:20 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:39:20 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:39:20 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:39:20 +0000   Thu, 15 Aug 2024 01:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.151
	  Hostname:    embed-certs-190398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eb300ebe3644369a5de316135d838a7
	  System UUID:                8eb300eb-e364-4369-a5de-316135d838a7
	  Boot ID:                    98d434e5-9be9-4d3f-841e-aeb76a80c23a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-kmmdc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-kx2xv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-embed-certs-190398                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-190398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-190398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-7hfvr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-190398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-4ldv7               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-190398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node embed-certs-190398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node embed-certs-190398 event: Registered Node embed-certs-190398 in Controller
	
	
	==> dmesg <==
	[  +0.058561] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884646] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.832383] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537949] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 01:29] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.054020] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068556] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.182072] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.139307] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.306934] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.030895] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +2.063977] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
	[  +0.059725] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.532942] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.449354] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 01:33] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.559129] systemd-fstab-generator[2578]: Ignoring "noauto" option for root device
	[Aug15 01:34] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.647430] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +5.370814] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.090609] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.458787] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b] <==
	{"level":"info","ts":"2024-08-15T01:33:59.378281Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T01:33:59.378434Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.151:2380"}
	{"level":"info","ts":"2024-08-15T01:33:59.378479Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.151:2380"}
	{"level":"info","ts":"2024-08-15T01:33:59.385602Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"cec33aa8f0724833","initial-advertise-peer-urls":["https://192.168.72.151:2380"],"listen-peer-urls":["https://192.168.72.151:2380"],"advertise-client-urls":["https://192.168.72.151:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.151:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T01:33:59.385650Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:33:59.541243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T01:33:59.541290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T01:33:59.541328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 received MsgPreVoteResp from cec33aa8f0724833 at term 1"}
	{"level":"info","ts":"2024-08-15T01:33:59.541341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:33:59.541346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 received MsgVoteResp from cec33aa8f0724833 at term 2"}
	{"level":"info","ts":"2024-08-15T01:33:59.541355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cec33aa8f0724833 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T01:33:59.541381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cec33aa8f0724833 elected leader cec33aa8f0724833 at term 2"}
	{"level":"info","ts":"2024-08-15T01:33:59.545338Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.549533Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cec33aa8f0724833","local-member-attributes":"{Name:embed-certs-190398 ClientURLs:[https://192.168.72.151:2379]}","request-path":"/0/members/cec33aa8f0724833/attributes","cluster-id":"31c137043c99215d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:33:59.549570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:33:59.549846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:33:59.552665Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:33:59.557614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:33:59.560441Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31c137043c99215d","local-member-id":"cec33aa8f0724833","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.560633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.562268Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.563444Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:33:59.565833Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:33:59.569241Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:33:59.570129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.151:2379"}
	
	
	==> kernel <==
	 01:43:20 up 14 min,  0 users,  load average: 0.25, 0.18, 0.14
	Linux embed-certs-190398 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813] <==
	W0815 01:33:55.000868       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.008771       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.064072       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.160813       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.161346       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.203093       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.216624       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.271875       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.322063       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.327486       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.480889       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.493647       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.522263       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.530729       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.562460       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.578873       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.600094       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.625869       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.693248       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.698933       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.851040       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.901960       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.031066       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.102718       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.191271       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946] <==
	E0815 01:39:02.792630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 01:39:02.792717       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:39:02.793894       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:39:02.793952       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:40:02.794075       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:40:02.794146       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:40:02.794386       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:40:02.794462       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:40:02.795295       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:40:02.796418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:42:02.795927       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:02.796309       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 01:42:02.797435       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:42:02.797575       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:02.797761       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:42:02.798982       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7] <==
	E0815 01:38:08.789287       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:38:09.253911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:38:38.796862       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:38:39.264645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:39:08.803570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:09.272557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:39:20.576361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-190398"
	E0815 01:39:38.809976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:39.280905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:40:08.819602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:09.290865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:40:12.414048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="240.661µs"
	I0815 01:40:26.412259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.094201ms"
	E0815 01:40:38.826756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:39.298445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:41:08.833777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:09.305645       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:41:38.840805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:39.313175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:08.848060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:09.321969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:38.855426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:39.331995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:43:08.861797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:43:09.339639       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:34:10.674829       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:34:10.694038       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.151"]
	E0815 01:34:10.694131       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:34:10.949426       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:34:10.949513       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:34:10.949586       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:34:10.959037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:34:10.963067       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:34:10.976764       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:34:10.992558       1 config.go:197] "Starting service config controller"
	I0815 01:34:10.999849       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:34:10.999937       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:34:10.999946       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:34:11.000527       1 config.go:326] "Starting node config controller"
	I0815 01:34:11.000535       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:34:11.101310       1 shared_informer.go:320] Caches are synced for node config
	I0815 01:34:11.101407       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:34:11.101458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5] <==
	W0815 01:34:01.849127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:34:01.849160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.849247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:01.849281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:34:01.850285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:34:01.850502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:34:01.850563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.751989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:34:02.752166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.827223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:34:02.827330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.838415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:34:02.838512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.849133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:02.849226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.922378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:02.922428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.953605       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:34:02.953653       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 01:34:03.024363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:03.024412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 01:34:05.928391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:42:14 embed-certs-190398 kubelet[2907]: E0815 01:42:14.506169    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686134505866030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:14 embed-certs-190398 kubelet[2907]: E0815 01:42:14.506537    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686134505866030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:15 embed-certs-190398 kubelet[2907]: E0815 01:42:15.395477    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:42:24 embed-certs-190398 kubelet[2907]: E0815 01:42:24.508597    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686144508342917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:24 embed-certs-190398 kubelet[2907]: E0815 01:42:24.508646    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686144508342917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:27 embed-certs-190398 kubelet[2907]: E0815 01:42:27.395828    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:42:34 embed-certs-190398 kubelet[2907]: E0815 01:42:34.510045    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686154509698380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:34 embed-certs-190398 kubelet[2907]: E0815 01:42:34.510111    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686154509698380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:39 embed-certs-190398 kubelet[2907]: E0815 01:42:39.396292    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:42:44 embed-certs-190398 kubelet[2907]: E0815 01:42:44.512423    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686164511946048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:44 embed-certs-190398 kubelet[2907]: E0815 01:42:44.512451    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686164511946048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:51 embed-certs-190398 kubelet[2907]: E0815 01:42:51.395846    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:42:54 embed-certs-190398 kubelet[2907]: E0815 01:42:54.514325    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686174513915376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:42:54 embed-certs-190398 kubelet[2907]: E0815 01:42:54.514362    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686174513915376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]: E0815 01:43:04.399281    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]: E0815 01:43:04.413097    2907 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]: E0815 01:43:04.516330    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686184516026456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:04 embed-certs-190398 kubelet[2907]: E0815 01:43:04.516365    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686184516026456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:14 embed-certs-190398 kubelet[2907]: E0815 01:43:14.518395    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686194517936772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:14 embed-certs-190398 kubelet[2907]: E0815 01:43:14.518435    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686194517936772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:18 embed-certs-190398 kubelet[2907]: E0815 01:43:18.398068    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	
	
	==> storage-provisioner [7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde] <==
	I0815 01:34:11.666057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:34:11.678664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:34:11.678784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:34:11.691123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:34:11.691267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"834cb5be-434c-4bf7-93c0-c8e1bed0fb8c", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4 became leader
	I0815 01:34:11.691441       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4!
	I0815 01:34:11.792228       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-190398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4ldv7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7: exit status 1 (62.324672ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4ldv7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)
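Note: the step that failed above is a label-selector wait: the test polls a namespace for pods matching a selector until they are Running or the 9m0s deadline expires (the no-preload variant below logs the concrete values: namespace "kubernetes-dashboard", selector "k8s-app=kubernetes-dashboard"). For illustration only, here is a minimal, hypothetical Go sketch of that kind of wait; it is not the minikube helpers' actual implementation and assumes client-go plus the default kubeconfig for the profile in question.

// Hypothetical sketch of a wait-for-running-pods loop (assumes client-go; not
// the minikube test helpers' real code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls the namespace for pods matching the label selector
// and returns once every matching pod reports phase Running, or the context
// deadline is hit (the tests above use a 9-minute deadline).
func waitForRunningPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q in namespace %q did not reach Running: %w", selector, ns, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Load the default kubeconfig (~/.kube/config); the failing tests instead
	// target a specific minikube profile context such as embed-certs-190398.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForRunningPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("wait failed:", err)
	}
}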

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884893 -n no-preload-884893
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:44:10.679564746 +0000 UTC m=+5925.379796341
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-884893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-884893 logs -n 25: (2.05682034s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
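
The fix.go lines above compare the guest's "date +%s.%N" reading against the host time and accept the ~75ms delta as within tolerance. Below is a minimal Go sketch of that comparison; the parsing helper and the 2-second tolerance are illustrative assumptions, not values taken from the minikube source.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1723685326.429908383")
// into a time.Time. Assumed format: seconds and a 9-digit nanosecond fraction.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723685326.429908383")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := guest.Sub(remote)
	// Hypothetical tolerance; the log only shows that ~75ms was accepted.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
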
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
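
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, drop any existing conmon_cgroup line and re-add conmon_cgroup = "pod". Below is a rough Go equivalent of those line edits; minikube itself shells out to sed as logged, so the regexp approach here is only an illustration.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf mirrors the logged sed edits: pin the pause image, force the
// cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" right after it.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := strings.Join([]string{
		`[crio.image]`,
		`pause_image = "registry.k8s.io/pause:3.9"`,
		`[crio.runtime]`,
		`cgroup_manager = "systemd"`,
		`conmon_cgroup = "system.slice"`,
	}, "\n")
	fmt.Println(rewriteCrioConf(in))
}
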
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
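
The netfilter check above fails with status 255 because the br_netfilter module is not loaded yet, so the module is probed and IPv4 forwarding is enabled before restarting crio. Below is a simplified local Go sketch of that sequence; minikube runs the real commands over SSH via ssh_runner, and running this sketch requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl only exists once the br_netfilter module is loaded, so probe
	// for the file first and load the module if it is missing.
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
		}
	}
	// Enable IPv4 forwarding the same way the logged shell command does.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("could not enable ip_forward: %v\n", err)
	}
}
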
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
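
The /etc/hosts command above filters out any stale host.minikube.internal line and appends a fresh one, so repeated starts do not accumulate duplicate entries. Below is a small Go sketch of the same upsert; the helper name and in-memory approach are illustrative, since the logged command does it with grep, echo and a temp file.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" line, mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHostsEntry(strings.TrimRight(string(data), "\n"), "192.168.50.1", "host.minikube.internal"))
}
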
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
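
The preload step above checks whether /preloaded.tar.lz4 already exists on the node, copies the ~473MB tarball over, extracts it under /var with lz4 and security xattrs preserved, and then removes it. Below is a Go sketch of the extraction step alone; it simply shells out to tar with the same flags as the logged command and needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Only extract when the tarball actually landed on the node, mirroring the
	// stat existence check in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present:", err)
		return
	}
	// Same flags as the logged command: keep security xattrs, decompress with
	// lz4, unpack under /var (where CRI-O's image store lives).
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
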
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
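
The openssl/ln pairs above compute each CA certificate's subject hash and create the <hash>.0 symlink that OpenSSL's lookup expects under /etc/ssl/certs. Below is a small Go sketch of that pairing; it shells out to openssl for the hash, and the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the logged steps: ask openssl for the certificate's
// subject hash and (re)create the <hash>.0 symlink OpenSSL uses for CA lookup.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour of `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
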
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
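
The openssl -checkend 86400 calls above verify that none of the control-plane certificates expires within the next 24 hours before reusing them. Below is an equivalent check in Go using crypto/x509; the helper name is illustrative, and the path is taken from the log so it only resolves on the node itself.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, which is what `openssl x509 -checkend 86400` tests for 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
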
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
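The 66919 lines above poll for the kube-apiserver process roughly every 500ms using the same pgrep pattern each time. A simplified sketch of such a fixed-interval wait loop is shown below; it assumes passwordless sudo and a Linux host with pgrep, and is only an illustration of the pattern, not the exact api_server.go logic.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const interval = 500 * time.Millisecond
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as in the log: match the full kube-apiserver command line.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(interval)
	}
	fmt.Println("timed out waiting for the apiserver process")
}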
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
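The sed commands logged above rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and allow unprivileged low ports via default_sysctls. The Go sketch below just writes out an approximation of the resulting fragment to a scratch file; the section layout and the scratch filename are assumptions pieced together from those commands, not a dump of the actual file.

package main

import (
	"fmt"
	"os"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	// Writing to a scratch path here; the real drop-in lives at
	// /etc/crio/crio.conf.d/02-crio.conf and is edited in place with sed.
	if err := os.WriteFile("02-crio.conf.example", []byte(conf), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}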
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
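The retry.go lines above show libmachine repeatedly querying the libvirt network for the VM's DHCP-assigned IP, sleeping a growing, jittered interval between attempts. A minimal sketch of that pattern, with a hypothetical lookupIP helper standing in for the libvirt lease query (this is not minikube's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, roughly the
// shape of the "will retry after ..." lines in the log above.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off between attempts
	}
	return "", fmt.Errorf("%s never reported an IP address", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-018537", 3); err != nil {
		fmt.Println(err)
	}
}
```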
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
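The preload step above runs `crictl images --output json` on the guest, decides the images are missing, copies and extracts the lz4 tarball, then re-runs the same check to confirm everything is present. A minimal local sketch of that image check, assuming crictl is on PATH (minikube runs it over SSH via ssh_runner):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors only the fields of `crictl images --output json`
// that this check needs.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already has the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err)
}
```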
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
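The bash one-liner above drops any stale control-plane.minikube.internal mapping from /etc/hosts and appends the current one. A rough Go equivalent of that effect, as a sketch only (the hypothetical ensureHostsEntry helper is not minikube's code, which uses the shell pipeline shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps host to
// ip: drop any old mapping, then append the new tab-separated entry.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping, like grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.72.151", "control-plane.minikube.internal"))
}
```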
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
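Each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0), which is the layout OpenSSL-based clients use to locate trusted CAs. A minimal sketch of that step, run locally rather than over ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks certPath into certsDir under its OpenSSL subject
// hash, matching the `openssl x509 -hash -noout` + `ln -fs` pair in the log.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```

The ".0" suffix is OpenSSL's collision counter; the sketch, like the log, assumes a single certificate per subject hash.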
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
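During a restart minikube does not rerun `kubeadm init` in full; the lines above invoke the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming kubeadm and the config already exist on the node (the real code prefixes each call with `sudo env PATH=...` over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: %v\n%s", args, err, out)
		if err != nil {
			return // stop at the first failed phase
		}
	}
}
```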
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
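The healthz sequence above is the expected shape during a restart: the endpoint first returns 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200. A minimal sketch of that poll loop, assuming an anonymous client that skips TLS verification (minikube itself authenticates against the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz URL until it returns 200 or the deadline
// passes; 403 and 500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: anonymous probe without the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.151:8443/healthz", 4*time.Minute))
}
```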
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
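The pod_ready lines above skip the extra wait while the hosting node still reports Ready=False, and otherwise block until each system-critical pod's Ready condition turns True. A minimal sketch of the condition checks behind those messages, assuming only the k8s.io/api types (the real code also lists the pods through a clientset):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// isNodeReady reports whether the node's Ready condition is True; when it is
// not, the wait is skipped with the "hosting pod ... not Ready" message.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	fmt.Println(isPodReady(&corev1.Pod{}), isNodeReady(&corev1.Node{}))
}
```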
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
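The WaitForSSH step above shells out to the external ssh client with host-key checking disabled and the machine's generated private key, and treats the host as reachable once `exit 0` succeeds. A minimal sketch of that probe, reusing the address and key path from the log (hypothetical helper name, not minikube's code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs `exit 0` on the target through the system ssh client,
// with options similar to the command line shown in the log above.
func sshExitZero(addr, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshExitZero("192.168.39.223",
		"/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa"))
}
```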
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
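
The `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means it will). An equivalent check written directly in Go, shown only for illustration (minikube shells out to openssl as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now,
// matching the semantics of "openssl x509 -checkend <seconds>".
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
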
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
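
After the kubeadm restart phases (certs, kubeconfig, kubelet-start, control-plane, etcd), the log waits for the kube-apiserver process to appear by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms. A stripped-down sketch of that polling loop (the 2-minute timeout is an illustrative value, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}
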
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
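
The healthz wait above is a plain HTTPS GET against https://192.168.39.223:8444/healthz: 403 while anonymous access is still blocked, 500 with the per-check breakdown while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running, and finally 200 "ok". A minimal probe sketch (TLS verification is skipped here purely to keep the example short; the real client trusts minikubeCA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skip verification instead of loading minikubeCA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.39.223:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
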
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
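
The bridge CNI step writes a small conflist (496 bytes in this run) to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; the sketch below writes a typical bridge + portmap conflist of the kind used with the crio runtime, and every field value in it should be treated as an assumption rather than what minikube actually wrote:

package main

import "os"

// A typical bridge+portmap conflist; the concrete values minikube writes are
// not visible in the log above, so all fields here are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}
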
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
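
The pod_ready loop above gives each system-critical pod up to 4m0s to report the Ready condition, and short-circuits (the "skipping!" errors) while the node itself is still NotReady. A condensed client-go sketch of the per-pod check; the kubeconfig path and pod name are taken from the log, but the code itself is only an illustration of the condition being tested:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has the Ready condition set to True.
func podReady(client *kubernetes.Clientset, name string) (bool, error) {
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19443-13088/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(client, "etcd-default-k8s-diff-port-018537")
	fmt.Println(ready, err)
}
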
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
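[Editor's note: the five commands above rebuild the control plane by running individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml rather than a full `kubeadm init`. A rough sketch of that sequence, with the paths and PATH override taken verbatim from the log; this is illustrative, not minikube's implementation:

// kubeadm_phases.go - runs the same init phases, in the same order, as the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}
]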
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
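[Editor's note: the healthz lines above show the expected progression of https://192.168.61.166:8443/healthz during restart: 403 while only anonymous access is possible, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, then 200 once bootstrap completes. A minimal sketch of that wait loop, assuming the endpoint from the log and skipping TLS verification purely to stay self-contained:

// healthz_wait.go - poll /healthz until it returns 200, tolerating transient 403/500.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.166:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
]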
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
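[Editor's note: the pod_ready.go lines above wait up to 4m0s for each system pod's "Ready" condition; the metrics-server pod never reaches it, which is what ultimately fails these tests. A hedged client-go sketch of that per-pod check; the kubeconfig path is a placeholder, and the pod name is taken from the log:

// pod_ready_wait.go - illustrative check of a pod's PodReady condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 240; i++ { // roughly the 4m0s budget seen in the log
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-qnnqs", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("pod never became Ready")
}
]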
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
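	The block above is one full iteration of minikube's log-gathering retry loop while it waits for an apiserver that never comes up: it probes for each control-plane container with crictl, finds none, and falls back to collecting kubelet, dmesg, CRI-O, and container-status output; the "describe nodes" step fails because nothing is listening on localhost:8443. The following is an illustrative shell sketch of one such cycle, built only from the commands shown in the log; the loop structure is for readability and is not minikube's actual implementation.

	    #!/bin/bash
	    # Illustrative sketch of the probe/collect cycle seen in the log above.
	    # Assumes crictl, journalctl, and the bundled kubectl are present on the node.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    # With no control-plane containers found, fall back to host-level logs:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	    # This is the step that fails while the apiserver is down (localhost:8443 refused):
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig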
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
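	Interleaved with that cycle are three other minikube processes (66492, 67000, 67451) polling metrics-server pods that never report Ready. A quick way to inspect the same condition by hand, assuming the addon's pods carry the usual k8s-app=metrics-server label (an assumption; substitute the real profile name for <profile>):

	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'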
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
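For context, the "stale config cleanup" sequence above is just a grep-then-remove pass over the four kubeconfig files before kubeadm is re-run. A minimal Go sketch of that pattern (run locally with os/exec rather than over SSH; the endpoint string is copied from the log, everything else is illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Files checked before re-running kubeadm init, as in the log above.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	endpoint := "https://control-plane.minikube.internal:8443"

	for _, f := range files {
		// If the file does not mention the expected endpoint (or, as in the
		// log above, does not exist at all), remove it so kubeadm regenerates it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s missing or stale, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}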
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
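The repeated pod_ready lines above are a poll loop that checks a pod's Ready condition until a deadline (here 4m0s) expires. A minimal client-go sketch of that pattern; the kubeconfig path, poll interval, and pod name are placeholders rather than values taken from minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition every two seconds until it is
// true or the deadline expires, similar to the wait seen in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q to be Ready", timeout, name)
}

func main() {
	// Placeholder kubeconfig path for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-sfnng", 4*time.Minute))
}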
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
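The log-gathering pass above is a two-step pattern: resolve container IDs with crictl, then tail each container's log. A rough Go sketch of the same steps (run locally, with the component names hard-coded for illustration; not the logs.go implementation itself):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		// Step 1: resolve container IDs for the component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			continue
		}
		// Step 2: tail the last 400 lines of each container's log.
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
		}
	}
}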
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
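The healthz probe above is a plain HTTPS GET against the apiserver endpoint. A minimal sketch of that check; it skips TLS verification for brevity, which is an assumption for illustration only (a real check would trust the cluster CA), and the address is the one shown in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.223:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d: %s\n", resp.StatusCode, body)
}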
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
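The NodePressure/capacity entries a few lines above come from reading each node's reported status. A small client-go sketch that prints the same ephemeral-storage and CPU figures (kubeconfig path is a placeholder, and this is not minikube's node_conditions.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Ephemeral storage and CPU capacity, as reported in the log above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}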
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
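	A minimal way to recompute the --discovery-token-ca-cert-hash printed in the join commands above, run by hand on the control-plane VM (the /var/lib/minikube/certs certificateDir is the one reported by the [certs] phase later in this log; the openssl pipeline is the standard kubeadm recipe, not something this test executes):

	    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'

	The output should match the sha256:9c3333a0... value embedded in both join commands.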
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
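	The scp above writes the 496-byte bridge CNI config used by the "Configuring bridge CNI" step; to look at what actually landed on the node (profile name taken from this log, command run from the host):

	    minikube -p embed-certs-190398 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"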
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
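	The burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: it simply polls until the default service account exists. A rough shell equivalent of what the ssh_runner is doing, assuming the same binary and kubeconfig paths:

	    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # the log shows retries roughly every 500ms
	    done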
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
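	The pod_ready waits that follow poll each control-plane pod in turn; a hand-run equivalent of the same check (context name from this log; the "component" labels are the usual static-pod labels, assumed here rather than taken from the test):

	    kubectl --context embed-certs-190398 -n kube-system get pods \
	      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'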
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
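	A way to verify the three enabled addons from outside the test harness (object names inferred from the manifests applied above; the APIService name is the conventional metrics-server one, not confirmed by this log):

	    kubectl --context embed-certs-190398 get storageclass
	    kubectl --context embed-certs-190398 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-190398 get apiservice v1beta1.metrics.k8s.io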
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
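	The healthz probe above can be reproduced by hand against the same endpoint; -k skips certificate verification, and anonymous access to /healthz relies on the default system:public-info-viewer binding (an assumption, not something this log verifies):

	    curl -k https://192.168.72.151:8443/healthz   # expect: ok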
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
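	Note that metrics-server-6867b74b74-4ldv7 stays Pending/ContainersNotReady: the addon image was set to fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), an address that cannot be pulled, which is consistent with the pod never reaching Ready. One way to inspect the pull failure by hand (the k8s-app=metrics-server label is an assumption):

	    kubectl --context embed-certs-190398 -n kube-system describe pod -l k8s-app=metrics-server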
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
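	The NodePressure step reads the node's capacity and conditions; the same data can be pulled manually (context and node names from this log):

	    kubectl --context embed-certs-190398 get node embed-certs-190398 -o jsonpath='{.status.capacity}'; echo
	    kubectl --context embed-certs-190398 describe node embed-certs-190398 | grep -A8 'Conditions:'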
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
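	The four grep/rm pairs above implement the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed (here all four were already gone after the reset). The same logic in a compact shell form:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done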
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
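	The kubelet-check and api-check phases above poll two health endpoints; they can be hit by hand, from inside the VM and from the host respectively (assumes curl is available in the guest image; IP from this log):

	    minikube -p no-preload-884893 ssh "curl -sS http://127.0.0.1:10248/healthz; echo"
	    curl -k https://192.168.61.166:8443/healthz; echo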
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
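
	The failure above is kubeadm timing out because the kubelet health endpoint on 127.0.0.1:10248 never answers. As a rough sketch only (the profile name below is a placeholder, not taken from this log), the checks kubeadm suggests can be reproduced on the test VM via minikube ssh:

	    minikube ssh -p <profile>    # <profile>: placeholder for the failing minikube profile
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
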
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
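
	A quick way to confirm what this addon enable actually produced, as a sketch: it assumes the metrics-server Deployment created by the addon is named "metrics-server" (which the pod name logged further down suggests), and it will only show Ready replicas once that pod leaves Pending:

	    minikube addons list -p no-preload-884893
	    kubectl --context no-preload-884893 -n kube-system get deploy metrics-server
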
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
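
	With the "Done!" message above, the no-preload-884893 context is configured on the host; a minimal sketch of re-checking the state this log reports (node Ready, kube-system pods Running, apiserver /healthz returning ok):

	    kubectl --context no-preload-884893 get nodes
	    kubectl --context no-preload-884893 -n kube-system get pods
	    kubectl --context no-preload-884893 get --raw /healthz
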
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
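	The kubelet never passed its health check, and the warnings above suggest passing --extra-config=kubelet.cgroup-driver=systemd to minikube start and then checking the kubelet's journal. A minimal sketch of acting on that suggestion is shown below; <profile> is a placeholder for the affected minikube profile and is not taken from this log.

	# Retry the start with the cgroup-driver hint quoted in the suggestion above.
	# <profile> is a placeholder; substitute the failing profile's name.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet health check still fails, inspect it on the node, mirroring
	# the 'systemctl status kubelet' / 'journalctl -xeu kubelet' advice printed
	# by kubeadm above.
	minikube ssh -p <profile> "sudo systemctl status kubelet"
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet"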
	
	
	==> CRI-O <==
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.181336138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686252181313092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f6a315-035a-4297-9f5b-52b0283de418 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.181870705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0dd048d-f470-4a70-af87-2e1732c7242d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.181942107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0dd048d-f470-4a70-af87-2e1732c7242d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.182140962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0dd048d-f470-4a70-af87-2e1732c7242d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.219386248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34ec0c6a-0dae-47de-98e3-c4dafa53783c name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.219477520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34ec0c6a-0dae-47de-98e3-c4dafa53783c name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.220401340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4c7b07d-b471-46b2-9942-b220e70b0809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.220808724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686252220782382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4c7b07d-b471-46b2-9942-b220e70b0809 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.221320805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c85f90e-267d-4d1c-9c16-fdc48bf08149 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.221380215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c85f90e-267d-4d1c-9c16-fdc48bf08149 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.221593616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c85f90e-267d-4d1c-9c16-fdc48bf08149 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.258349876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb60002b-f159-4af9-86d5-79e0225b1190 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.258427203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb60002b-f159-4af9-86d5-79e0225b1190 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.259481153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b0cfb4a-f0c1-47a5-be90-f2b32b225582 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.259943788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686252259914577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b0cfb4a-f0c1-47a5-be90-f2b32b225582 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.260620560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19265ac2-29cb-461b-9704-000667929b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.260678911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19265ac2-29cb-461b-9704-000667929b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.260937583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19265ac2-29cb-461b-9704-000667929b3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.293104608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a341f88f-b149-4629-9d40-b84e51a14707 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.293184477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a341f88f-b149-4629-9d40-b84e51a14707 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.294257753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fb93bdc-025f-486a-91f8-0b27cf743e1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.294581123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686252294561811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fb93bdc-025f-486a-91f8-0b27cf743e1d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.295064458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c07aa54c-75ab-4646-87ce-fb31b425af25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.295151362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c07aa54c-75ab-4646-87ce-fb31b425af25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:44:12 no-preload-884893 crio[725]: time="2024-08-15 01:44:12.295391047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c07aa54c-75ab-4646-87ce-fb31b425af25 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4dbba9667928e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   5c7008c348c98       kube-proxy-dpggv
	57c2ab89a0842       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   dee5eaae9cbd5       storage-provisioner
	f4f535bcfc2a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3e54b8667374b       coredns-6f6b679f8f-t77b6
	79fc1478a6886       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e42f3999b7688       coredns-6f6b679f8f-srq48
	581ca8baf5f89       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ed4b9c791d800       etcd-no-preload-884893
	62400e7ad5626       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0eb77cab44556       kube-controller-manager-no-preload-884893
	7c7b4ee82b9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   4ff603c752517       kube-scheduler-no-preload-884893
	3ad0ed3214dba       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   2724a4b97b2c7       kube-apiserver-no-preload-884893
	49b854d9fa000       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   fe23604f5c857       kube-apiserver-no-preload-884893
	
	
	==> coredns [79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-884893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-884893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=no-preload-884893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-884893
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:44:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:40:10 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:40:10 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:40:10 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:40:10 +0000   Thu, 15 Aug 2024 01:34:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.166
	  Hostname:    no-preload-884893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b85121e7c83470e9872f0b2990e5486
	  System UUID:                0b85121e-7c83-470e-9872-f0b2990e5486
	  Boot ID:                    edd7858c-2fa1-497f-b295-6f7fd2f899e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-srq48                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-6f6b679f8f-t77b6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-884893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-884893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-884893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-dpggv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-no-preload-884893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-w47b2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m25s)  kubelet          Node no-preload-884893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m25s)  kubelet          Node no-preload-884893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m25s)  kubelet          Node no-preload-884893 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node no-preload-884893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node no-preload-884893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node no-preload-884893 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node no-preload-884893 event: Registered Node no-preload-884893 in Controller
	
	
	==> dmesg <==
	[  +0.052228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039079] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839275] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.854598] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527464] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.624334] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.055295] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056479] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.197619] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.129485] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.284437] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[Aug15 01:30] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.064506] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.782167] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +5.594688] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.801928] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 01:34] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.813940] systemd-fstab-generator[3061]: Ignoring "noauto" option for root device
	[  +4.561994] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.481705] systemd-fstab-generator[3381]: Ignoring "noauto" option for root device
	[  +5.863050] systemd-fstab-generator[3504]: Ignoring "noauto" option for root device
	[  +0.099771] kauditd_printk_skb: 14 callbacks suppressed
	[Aug15 01:35] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0] <==
	{"level":"info","ts":"2024-08-15T01:34:49.021775Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T01:34:49.022030Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e532d532ae69e491","initial-advertise-peer-urls":["https://192.168.61.166:2380"],"listen-peer-urls":["https://192.168.61.166:2380"],"advertise-client-urls":["https://192.168.61.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T01:34:49.022077Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T01:34:49.022168Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.166:2380"}
	{"level":"info","ts":"2024-08-15T01:34:49.022201Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.166:2380"}
	{"level":"info","ts":"2024-08-15T01:34:49.140796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.140927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.140974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 received MsgPreVoteResp from e532d532ae69e491 at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.141019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 received MsgVoteResp from e532d532ae69e491 at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e532d532ae69e491 elected leader e532d532ae69e491 at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.145892Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.148001Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e532d532ae69e491","local-member-attributes":"{Name:no-preload-884893 ClientURLs:[https://192.168.61.166:2379]}","request-path":"/0/members/e532d532ae69e491/attributes","cluster-id":"f878173fc0af8a15","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:34:49.148228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:34:49.148560Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:34:49.149626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f878173fc0af8a15","local-member-id":"e532d532ae69e491","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.159673Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.151801Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:34:49.152407Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:34:49.159264Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:34:49.163827Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.163873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:34:49.164642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:34:49.167466Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.166:2379"}
	
	
	==> kernel <==
	 01:44:12 up 14 min,  0 users,  load average: 0.13, 0.18, 0.14
	Linux no-preload-884893 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018] <==
	W0815 01:39:52.241561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:39:52.241611       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:39:52.242705       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:39:52.242822       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:40:52.243116       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 01:40:52.243131       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:40:52.243604       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0815 01:40:52.243682       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 01:40:52.244906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:40:52.244916       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:42:52.245785       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:52.246228       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:42:52.246392       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:42:52.246580       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:42:52.247487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:42:52.248663       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785] <==
	W0815 01:34:43.888297       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.894874       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.944354       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.954459       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.984349       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.992907       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.009578       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.027082       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.044700       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.059074       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.090170       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.097783       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.101078       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.117997       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.118067       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.121431       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.144296       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.156652       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.194103       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.202442       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.202674       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.209977       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.242434       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.394129       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.640008       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4] <==
	E0815 01:38:58.238598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:38:58.672173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:39:28.244407       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:28.679494       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:39:58.251221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:39:58.689521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:40:10.489691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-884893"
	E0815 01:40:28.257993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:28.697823       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:40:58.265606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:40:58.705554       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:40:58.740064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="218.256µs"
	I0815 01:41:10.738650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="108.194µs"
	E0815 01:41:28.272395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:28.713971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:41:58.279620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:41:58.721626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:28.287343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:28.729607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:42:58.294830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:42:58.738885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:43:28.301168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:43:28.746493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:43:58.307119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:43:58.754791       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:35:01.148101       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:35:01.161834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.166"]
	E0815 01:35:01.161931       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:35:01.218897       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:35:01.218940       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:35:01.218968       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:35:01.223202       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:35:01.223539       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:35:01.223565       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:35:01.225092       1 config.go:197] "Starting service config controller"
	I0815 01:35:01.225142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:35:01.225179       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:35:01.225183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:35:01.227499       1 config.go:326] "Starting node config controller"
	I0815 01:35:01.227568       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:35:01.325403       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:35:01.325521       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:35:01.328010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d] <==
	W0815 01:34:52.159925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:52.160034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.187533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:34:52.187954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.204939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:34:52.205010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.312789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:34:52.312928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.361340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:34:52.361382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.368831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:34:52.368948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.403441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 01:34:52.403662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.515628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 01:34:52.516123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.549321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:34:52.549497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.556568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:34:52.556622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.589771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:52.589819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.824918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:34:52.824980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 01:34:54.566269       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:42:55 no-preload-884893 kubelet[3388]: E0815 01:42:55.727203    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:43:03 no-preload-884893 kubelet[3388]: E0815 01:43:03.865396    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686183865084950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:03 no-preload-884893 kubelet[3388]: E0815 01:43:03.865664    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686183865084950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:06 no-preload-884893 kubelet[3388]: E0815 01:43:06.725548    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:43:13 no-preload-884893 kubelet[3388]: E0815 01:43:13.867867    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686193867286120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:13 no-preload-884893 kubelet[3388]: E0815 01:43:13.868209    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686193867286120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:21 no-preload-884893 kubelet[3388]: E0815 01:43:21.725671    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:43:23 no-preload-884893 kubelet[3388]: E0815 01:43:23.869691    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686203869402560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:23 no-preload-884893 kubelet[3388]: E0815 01:43:23.869757    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686203869402560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:33 no-preload-884893 kubelet[3388]: E0815 01:43:33.871903    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686213871540194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:33 no-preload-884893 kubelet[3388]: E0815 01:43:33.871932    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686213871540194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:34 no-preload-884893 kubelet[3388]: E0815 01:43:34.724671    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:43:43 no-preload-884893 kubelet[3388]: E0815 01:43:43.873879    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686223873551524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:43 no-preload-884893 kubelet[3388]: E0815 01:43:43.873950    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686223873551524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:49 no-preload-884893 kubelet[3388]: E0815 01:43:49.726037    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]: E0815 01:43:53.735383    3388 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]: E0815 01:43:53.875852    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686233875401826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:43:53 no-preload-884893 kubelet[3388]: E0815 01:43:53.875890    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686233875401826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:44:01 no-preload-884893 kubelet[3388]: E0815 01:44:01.724519    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:44:03 no-preload-884893 kubelet[3388]: E0815 01:44:03.877605    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686243877310168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:44:03 no-preload-884893 kubelet[3388]: E0815 01:44:03.877925    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686243877310168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944] <==
	I0815 01:35:01.030677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:35:01.054167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:35:01.054489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:35:01.066838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:35:01.068593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022!
	I0815 01:35:01.068694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e77a760b-ddfd-47db-860c-05aaa5af85a2", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022 became leader
	I0815 01:35:01.168877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022!
	

                                                
                                                
-- /stdout --
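The kubelet log above shows metrics-server-6867b74b74-w47b2 stuck in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, a registry host the node cannot pull from; the eviction-manager and iptables-canary messages repeat throughout the log and do not reference this pod. As a rough sketch (assuming the no-preload-884893 kubeconfig context is available on the test host and the addon pod carries the usual k8s-app=metrics-server label, which is an assumption), the same condition could be confirmed directly with:

  kubectl --context no-preload-884893 -n kube-system get pods -l k8s-app=metrics-server \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'

For the pod above this would be expected to print ImagePullBackOff (or ErrImagePull) in the second column.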
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-884893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w47b2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2: exit status 1 (64.908344ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w47b2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.10s)
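The post-mortem above first lists non-running pods across all namespaces and then tries to describe each one. The NotFound error is expected here: the describe call was issued without -n, so it looked in the default namespace, while the kubelet log shows the pod lives in kube-system. A minimal sketch of the same two-step check with the namespace supplied (assuming the no-preload-884893 context is still present on the test host):

  kubectl --context no-preload-884893 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
  kubectl --context no-preload-884893 -n kube-system describe pod metrics-server-6867b74b74-w47b2

Selecting by label instead of the generated pod name (e.g. the k8s-app=metrics-server label assumed above) would also make the second step robust to the pod being recreated between the two commands.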

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
E0815 01:38:45.641283   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
E0815 01:39:41.523448   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
[identical warning logged 127 times in succession]
E0815 01:41:48.712780   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
[identical warning logged 35 times in succession]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
E0815 01:43:45.640959   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
E0815 01:44:41.523282   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
[the identical "connection refused" warning from helpers_test.go:329 was emitted 78 more times while polling the kubernetes-dashboard namespace during the 9m0s window; duplicate lines omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (223.698857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-390782" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
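(Editor's note: the check that timed out simply polls the kubernetes-dashboard namespace for pods matching k8s-app=kubernetes-dashboard. A rough manual equivalent, assuming the old-k8s-version-390782 profile's context is present in the kubeconfig, would be:

	kubectl --context old-k8s-version-390782 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-390782 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

Since the apiserver at 192.168.50.21:8443 refused connections for the entire window, any such query would have failed the same way.)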
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (215.323574ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25: (1.531154845s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
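	(Editor's note, not part of the test output: the openssl x509 -checkend 86400 runs above verify that each certificate stays valid for at least the next 24 hours; a non-zero exit would trigger regeneration. A minimal Go sketch of an equivalent check, using one certificate path that appears in the log, is shown below.)

	// certcheck.go - minimal sketch mirroring `openssl x509 -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// -checkend N succeeds only if the cert is still valid N seconds from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}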
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
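	(Editor's note, not part of the test output: the "will retry after ..." lines above come from minikube's retry helper, which repeatedly looks up the libvirt domain's DHCP lease and waits a growing, jittered delay between attempts until the embed-certs-190398 VM reports an IP. The Go sketch below is a hypothetical illustration of that pattern; lookupIP, the delays, and the timeout are assumptions, not minikube's actual implementation.)

	// waitip.go - illustrative poll-with-growing-delay loop.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a placeholder for inspecting the network's DHCP leases
	// for the machine's MAC address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// add jitter and back off so slow boots are not hammered
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
			time.Sleep(wait)
			delay *= 2
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		if ip, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}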
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
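	(Editor's note, not part of the test output: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are minikube polling roughly every 500ms for the kube-apiserver process to appear after `kubeadm init phase etcd local`. The Go sketch below is an assumed, simplified version of that wait loop: it runs pgrep locally instead of over SSH and uses an illustrative 2-minute timeout.)

	// waitapiserver.go - illustrative ~500ms polling loop for the apiserver process.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pgrep exits non-zero when no process matches, so Run() == nil means "found".
	func apiserverRunning() bool {
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		timeout := time.After(2 * time.Minute)
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-timeout:
				fmt.Println("timed out waiting for apiserver process to appear")
				return
			case <-tick.C:
				if apiserverRunning() {
					fmt.Println("apiserver process found")
					return
				}
			}
		}
	}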
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
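The three scp calls above install the docker-machine TLS material that configureAuth just generated (CA, server cert, server key) under /etc/docker on the guest. A hypothetical spot check, not part of the test run, to confirm the server certificate chains to the copied CA would be:

	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem

which should print "/etc/docker/server.pem: OK" for a freshly provisioned pair.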
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
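For orientation, the reported delta is simply the guest clock reading minus the host-side timestamp taken just before it:

	1723685345.869278337 s (guest) - 1723685345.786613294 s (host) = 0.082665043 s ≈ 82.665 ms

Since that falls within minikube's drift tolerance, the guest clock is left untouched.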
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
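The sed edits above rewrite the CRI-O drop-in config in place: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and allow unprivileged processes to bind low ports. Assuming the stock drop-in shipped in the minikube ISO, the touched portion of /etc/crio/crio.conf.d/02-crio.conf should end up looking roughly like this (reconstructed from the commands logged above, not captured from the VM):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"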
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
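The one-liner above is the /etc/hosts update pattern used throughout these logs: filter out any existing host.minikube.internal entry, append the fresh mapping, and copy the temp file back into place with sudo. The net effect is a single line in the guest's /etc/hosts:

	192.168.72.1	host.minikube.internal

The same pattern is repeated further down for control-plane.minikube.internal (192.168.72.151).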
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
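Each of the three certificate installs above follows the same OpenSSL hashed-directory convention: compute the subject hash of the PEM, then symlink it as <hash>.0 under /etc/ssl/certs so that OpenSSL's CA lookup can find it. Roughly (a hypothetical restatement of the commands above):

	hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0

which is how the b5213941.0 link above comes about.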
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
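The six -checkend runs above are freshness checks rather than full validations: -checkend 86400 asks whether the certificate will still be valid 86400 seconds (24 hours) from now and exits non-zero if it would expire within that window, presumably so minikube can decide whether any control-plane certificates need regenerating. For example (hypothetical):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 && echo "still valid for at least 24h"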
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
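The block above is the stale-config check: for each kubeconfig under /etc/kubernetes, the expected control-plane endpoint is grepped for, and the file is removed when the check fails (here they were all already absent). A minimal Go sketch of that grep-then-remove pattern, assuming the endpoint and file list from the log; runCmd is a hypothetical local stand-in for minikube's ssh_runner, not its real API:

	// stale-config cleanup sketch (illustrative only)
	package main

	import (
		"fmt"
		"os/exec"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	var confFiles = []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	// runCmd shells out locally; in minikube the equivalent runs over SSH.
	func runCmd(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}

	func main() {
		for _, f := range confFiles {
			// grep exits non-zero when the endpoint (or the file) is missing,
			// in which case the stale config is removed.
			if err := runCmd("sudo", "grep", endpoint, f); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = runCmd("sudo", "rm", "-f", f)
			}
		}
	}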
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
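After the fresh kubeadm.yaml is copied into place, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init. A hedged sketch of that sequence, using the exact PATH, config path, and bash -c wrapping recorded in the commands above:

	// kubeadm phase sequence sketch, mirroring the logged commands
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}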
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
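The 403 → 500 → 200 progression above is the expected startup shape: the anonymous /healthz probe is rejected until RBAC bootstrap roles exist, then individual post-start hooks report [-] until they finish, and finally the endpoint returns "ok". A minimal polling sketch under those assumptions; the URL is the one in the log, while the 500ms interval and the insecure TLS client (no credentials are loaded yet at this point) are illustrative choices:

	// apiserver /healthz polling sketch
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.72.151:8443/healthz"
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				// 403 (anonymous user) and 500 (post-start hooks pending) are retried.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}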
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
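The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist, not its contents. The snippet below is therefore only a generic bridge+portmap conflist written out from Go, to illustrate what "configuring bridge CNI" amounts to; it is not the exact file minikube installs:

	// illustrative bridge CNI conflist writer (contents are assumed, not from the log)
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}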
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
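The pod_ready loop above skips the per-pod wait while the hosting node still reports Ready=False, then keeps polling the pod's Ready condition. A hedged client-go sketch of that check; the node and pod names are taken from the log, but the kubeconfig path is a placeholder and the helper is illustrative rather than minikube's actual pod_ready.go:

	// Ready-condition wait sketch: node must be Ready before the pod's
	// PodReady condition is consulted, as in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(ctx context.Context, cs *kubernetes.Clientset, node, pod string) (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, nil // node not Ready yet: skip the pod check
			}
		}
		p, err := cs.CoreV1().Pods("kube-system").Get(ctx, pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := podReady(context.Background(), cs, "embed-certs-190398", "kube-controller-manager-embed-certs-190398")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}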
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
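The guest-clock check above compares the VM's epoch timestamp with the host's and only resyncs when the absolute delta exceeds a tolerance; here the ~86ms delta passes. A small sketch of that comparison, reusing the timestamp from the log; the 1s tolerance is an assumed value, since the log only shows that 86ms is within it:

	// clock-skew tolerance check sketch
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1723685367, 180566854) // epoch value reported by the guest above
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 1 * time.Second // assumed for illustration
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}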
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
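
The lines above show the guest's CRI-O being pointed at the registry.k8s.io pause image and the cgroupfs cgroup manager before the daemon is restarted. Purely as an illustration of those logged steps (this is not minikube's own code, and the runCmd helper is a hypothetical stand-in for ssh_runner.Run on the guest VM), a minimal Go sketch of the same sed edits against /etc/crio/crio.conf.d/02-crio.conf:

// crio_reconfig.go: replay the CRI-O configuration edits seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs one step through a shell, roughly as ssh_runner does on the guest.
func runCmd(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Use the registry.k8s.io/pause:3.10 pause image, as in the log.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
		// Switch the cgroup manager to cgroupfs and pin conmon to the pod cgroup.
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// Pick up the new configuration.
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := runCmd(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}

After the restart, the log continues by waiting for /var/run/crio/crio.sock and probing crictl, which is the point of the "Will wait 60s" lines that follow.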
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
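
The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current gateway mapping, and copy the staged file into place. A minimal Go sketch of the same idea (illustrative only; the staging path /tmp/hosts.new is an assumption, and writing /etc/hosts itself would need elevated privileges):

// hosts_update.go: idempotently refresh the host.minikube.internal mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the same hostname, mirroring the grep -v step.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Stage the result, as the logged command does with /tmp/h.$$ before sudo cp.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("staged /tmp/hosts.new; copy over /etc/hosts with elevated privileges")
}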
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
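
The string of openssl probes above asks one question per certificate: does it stay valid for at least the next 86400 seconds (24 hours)? A minimal Go sketch of the same check for one of the certificates named in the log (the path comes from the log; everything else is illustrative, not minikube's code):

// checkend.go: Go equivalent of `openssl x509 -noout -in CERT -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend N asks whether the certificate expires within the next N seconds.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("%s expires at %s (within 24h); would need regeneration\n", path, cert.NotAfter)
	} else {
		fmt.Printf("%s is valid past %s\n", path, deadline)
	}
}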
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
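
The repeated /healthz probes above are expected to fail at first: the apiserver answers 403 while anonymous access is still being wired up, then 500 while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes complete. A minimal, self-contained Go sketch of that polling loop (the URL is taken from the log; skipping TLS verification is an assumption to keep the sketch standalone, whereas a real client would trust the cluster CA instead):

// healthz_poll.go: probe the apiserver's /healthz until it returns 200 or we time out.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed serving cert during bring-up; see the assumption noted above.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.223:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 here mean "not ready yet", matching the log output.
			fmt.Printf("not ready yet (%d), retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}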
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
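(The healthz wait above simply polls the apiserver endpoint until the post-start hooks finish and the response flips from 500 to 200. A minimal sketch of such a polling loop follows; TLS verification is skipped purely to keep the sketch short, whereas the real client trusts the cluster CA.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.223:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}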
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
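(The bridge CNI step writes a small conflist to /etc/cni/net.d/1-k8s.conflist; the log only records that 496 bytes were copied, not the contents. The snippet below writes an assumed, generic bridge + host-local configuration of that general shape, purely as an illustration; the file minikube actually generates may differ.)

package main

import (
	"log"
	"os"
)

// Assumed example of a bridge CNI conflist; fields and values are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}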
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
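(pod_ready.go is walking each system-critical pod and checking its Ready condition, skipping pods whose node is not yet Ready. A condensed client-go sketch of that readiness check follows; it is not minikube's implementation, and the kubeconfig path is simply the one visible in the log.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19443-13088/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
		LabelSelector: "k8s-app=kube-dns", // one of the label selectors waited on above
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}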
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
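(Enabling an addon here amounts to copying its manifests onto the node and applying them with the bundled kubectl against the node-local kubeconfig. The sketch below mirrors the command shown in the log using plain os/exec rather than minikube's SSH runner, so it only makes sense when run on the node itself; paths are copied from the log.)

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies addon manifests with the bundled kubectl, as in the log.
func applyAddon(manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	); err != nil {
		fmt.Println(err)
	}
}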
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
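(Provisioning runs each of these hostname and /etc/hosts commands over SSH with the machine's generated key; the log shows both an external ssh invocation and a native client. The sketch below shows a native-style client with golang.org/x/crypto/ssh, only to illustrate the mechanism; host key checking is disabled to keep it short, and the address, user and key path are taken from the log.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node and runs a single command, roughly what the
// provisioner does for the hostname steps above.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.166:22", "docker",
		"/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa",
		"hostname")
	fmt.Println(out, err)
}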
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
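(configureAuth regenerates the machine's server certificate so its SANs cover 127.0.0.1, the node IP and the hostnames listed in the provision.go line above, signed by the local minikube CA. Below is a bare-bones crypto/x509 sketch of issuing such a SAN certificate; it is not minikube's helper, and the throwaway CA in main exists only so the sketch runs end to end.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate whose SANs mirror the log above.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-884893"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-884893"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.166")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA, only so the example is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	der, _, err := issueServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}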
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
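The SSH command above writes /etc/sysconfig/crio.minikube so that CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarts CRI-O. The %!s(MISSING) in the logged template is only a printf-verb artifact of how the command is logged; what actually runs is roughly:

    sudo mkdir -p /etc/sysconfig
    printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio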
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
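postStartSetup creates the standard directory tree and then mirrors local assets: anything placed under the host's .minikube/files/ is copied into the guest at the same relative path, which is how 202792.pem ends up in /etc/ssl/certs. As a sketch, using the paths from the log above:

    # Host side: the asset picked up by the filesync scan
    ls /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem
    # Guest side after postStartSetup
    ls -l /etc/ssl/certs/202792.pem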
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
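fix.go compares the guest clock (read with date +%s.%N, logged here with stray %! verbs) against the host timestamp and only forces a resync when the delta exceeds the tolerance; the ~98ms difference above passes. The arithmetic behind the reported delta, using the two timestamps from the log (the real comparison happens in Go, this is just a sketch):

    guest_time=1723685388.495787154   # "date +%s.%N" on the guest, per the log
    host_time=1723685388.397394567    # host-side reference time, per the log
    echo "$guest_time $host_time" | awk '{ printf "delta: %.6f s\n", $1 - $2 }'
    # -> delta: 0.098393 s, matching the ~98.39ms reported above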
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
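Any pre-existing bridge/podman CNI definitions are renamed out of the way so they cannot conflict with the CNI that minikube manages; here only 87-podman-bridge.conflist was found and disabled. The %!p(MISSING) is again a logging artifact; the intent of the find command is roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;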
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
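With CRI-O as the selected runtime, both cri-docker and docker are stopped, disabled and masked so that nothing else answers on a CRI socket; the final is-active probe confirms docker is down. Condensed, the sequence run above is:

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service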
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
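The sed series above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A quick way to confirm the resulting keys (sketch):

    # Expected after the edits:
    #   pause_image     = "registry.k8s.io/pause:3.10"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf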
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
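The sysctl probe failed only because br_netfilter was not loaded yet (the log explicitly treats this as non-fatal), so the module is loaded and IPv4 forwarding is enabled before the runtime restart:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # With the module loaded, the earlier probe would now succeed:
    sudo sysctl net.bridge.bridge-nf-call-iptables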
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
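CRI-O is then restarted and verified in two ways: a 60s wait for the socket to reappear and a crictl version call over CRI (reporting cri-o 1.29.1, RuntimeApiVersion v1 above). In shell terms:

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    stat /var/run/crio/crio.sock          # socket back within the 60s budget
    sudo /usr/bin/crictl version          # RuntimeName cri-o, RuntimeVersion 1.29.1
    crio --version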
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
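host.minikube.internal is (re)pinned to the libvirt gateway 192.168.61.1: any stale entry is filtered out of /etc/hosts and the current mapping appended, via a temp file because the target needs sudo. The one-liner above, unrolled:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.61.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts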
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
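Because this is the no-preload profile there is no preloaded image tarball, so LoadCachedImages first asks the local Docker daemon for each of the eight required images (every lookup fails, as logged above) and then falls back to the per-image tars cached under .minikube/cache/images. The guest-side checks that drive this decision look like:

    # Decides "images are not preloaded":
    sudo crictl images --output json
    # Per-image presence probe before any transfer (scheduler shown as the example):
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-scheduler:v1.31.0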
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
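For the default-k8s-diff-port profile (PID 67451) the node turned Ready after ~7s and every core control-plane pod reported Ready within seconds, but metrics-server never does, and the loop keeps polling it below. A manual spot check would be something like the sketch that follows (the kubectl context name is assumed to match the profile name, which minikube normally arranges but the log does not show):

    kubectl --context default-k8s-diff-port-018537 -n kube-system get pods \
      -l k8s-app=metrics-server -o wide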
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
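Each image missing from the runtime follows the same round trip: remove any stale tag with crictl, skip the copy when the tar already sits in /var/lib/minikube/images, then podman load it (the scheduler image took ~2.07s above). For one image the sequence is:

    sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
    stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0   # "copy: skipping ... (exists)" when already present
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0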
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
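The old-k8s-version profile (PID 66919) never gets a kube-apiserver container up, so each retry runs the same diagnostics: crictl ps per component (all empty), then kubelet, dmesg, CRI-O and container-status logs; kubectl describe nodes keeps failing because nothing listens on localhost:8443. The commands gathered on each pass are:

    sudo crictl ps -a --quiet --name=kube-apiserver            # repeated for each component
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a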
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
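
The container-status step above shells out to crictl and falls back to docker when crictl is absent. A minimal local sketch of that fallback (assumption: run directly on the node rather than through minikube's ssh_runner; bash and sudo available):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the same shell fallback as the log line: try crictl,
// and if it is not installed let the `|| sudo docker ps -a` branch run.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
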
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
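
Image loading here is a sequential `sudo podman load -i <tarball>` per cached image. A reduced local sketch (assumption: podman installed where the code runs; the real flow executes these commands on the VM over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The three tarballs the log transfers and loads for this profile.
	images := []string{
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0",
		"/var/lib/minikube/images/etcd_3.5.15-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, img := range images {
		cmd := exec.Command("sudo", "podman", "load", "-i", img)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "load %s: %v\n", img, err)
		}
	}
}
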
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
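
The one-liner above pins control-plane.minikube.internal in /etc/hosts: it filters out any stale entry, appends the current node IP, and installs the result with sudo cp. A hedged Go sketch of the same edit against a local copy of the file (the path in main is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinControlPlane drops any existing "<ip>\tcontrol-plane.minikube.internal"
// line and appends one for the current IP, mirroring the grep/echo one-liner.
func pinControlPlane(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry from a previous start
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinControlPlane("/tmp/hosts.copy", "192.168.61.166"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
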
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
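
The pod_ready lines poll the pod's Ready condition until it flips to True. A sketch of reading that condition with client-go (assumptions: the k8s.io/client-go module is available; the kubeconfig path and pod name are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same signal
// behind the repeated "Ready":"False" lines above.
func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-gdpxh")
	fmt.Println(ok, err)
}
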
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
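
Each `openssl x509 -checkend 86400` above asks whether the certificate is still valid 24 hours from now. A rough Go equivalent using crypto/x509 (the certificate path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from
// now, i.e. the condition `openssl x509 -checkend <seconds>` tests.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
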
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
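
On this restart path the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) are run one by one against the generated /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that loop (assumption: run on the node itself; fail-fast error handling):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The phases the log runs, in order, against the generated config.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		shell := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		cmd := exec.Command("/bin/bash", "-c", shell)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %s failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
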
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
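
The healthz wait above polls https://<node-ip>:8443/healthz roughly every 500ms, tolerating the 403 (anonymous user) and 500 (rbac/bootstrap-roles still pending) responses until a plain 200 "ok" comes back. A minimal sketch of such a poller (assumption: TLS verification skipped purely for illustration):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it returns 200,
// printing the intermediate 403/500 bodies the way the log above does.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence in the log
	}
	return fmt.Errorf("apiserver healthz not ok within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.166:8443/healthz", 4*time.Minute))
}
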
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
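The repeating block above is minikube's log collector probing each expected control-plane container through crictl and, finding none, falling back to journalctl, dmesg, and kubectl output. A minimal shell sketch of the same probe, built only from the commands already quoted in the log lines above (assumes crictl and journalctl are available on the node, as those lines do):

	# Hedged sketch: reproduce the per-component probe the log collector runs above.
	# Component names and commands mirror the log lines; nothing here is a new flag or API.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	# Fallback log sources, matching the "Gathering logs for ..." lines:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400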
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
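The pod_ready entries above are minikube waiting up to 4m0s for each system-critical pod to report Ready, skipping pods whose node itself still reports "Ready":"False". Roughly the same check by hand with kubectl, as a sketch only: the pod name is taken from the log, while the context name is an assumption (minikube normally names the context after the profile):

	# Hedged sketch: check the Ready condition that pod_ready.go is polling for.
	kubectl --context no-preload-884893 -n kube-system get pod kube-scheduler-no-preload-884893 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or block until Ready, which is what the 4m0s wait effectively does:
	kubectl --context no-preload-884893 -n kube-system wait \
	  --for=condition=Ready pod/kube-scheduler-no-preload-884893 --timeout=4m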
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
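Each "describe nodes" attempt above fails the same way: kubectl on the node cannot reach the apiserver at localhost:8443, which is consistent with the earlier probes finding no kube-apiserver container. A quick hedged check of that port with standard tools (the URL is the one quoted in the stderr above; expect the same refusal while the apiserver is down):

	# Hedged sketch: confirm whether anything is listening on the apiserver port.
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# The endpoint the refused connection was aimed at:
	curl -k https://localhost:8443/healthz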
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
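The interleaved pod_ready lines from the other profiles show the metrics-server pods (qnnqs, sfnng, gdpxh) stuck at Ready=False for minutes. A hedged way to dig into one of them, run against whichever profile's kubeconfig owns that pod; the pod name is copied from the log and the rest is standard kubectl:

	# Hedged sketch: inspect why a metrics-server pod never becomes Ready.
	kubectl -n kube-system describe pod metrics-server-6867b74b74-gdpxh
	# Recent events for that pod:
	kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-gdpxh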
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
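The 66919 lines above show the cluster reset path: after restartPrimaryControlPlane gives up, the run executes kubeadm reset, checks each /etc/kubernetes/*.conf for the expected control-plane endpoint (removing any file that fails the grep), and then re-runs kubeadm init with a long --ignore-preflight-errors list. The following is a rough Go sketch of just the stale-config cleanup step, reusing the four paths and endpoint that appear in the log; it is an illustration under those assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

var confs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

// cleanupStaleConfigs removes any kubeconfig that does not mention the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log. A missing file
// also fails the grep and is simply removed again (rm -f semantics).
func cleanupStaleConfigs() {
	for _, conf := range confs {
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "remove %s: %v\n", conf, err)
			}
		}
	}
}

func main() {
	cleanupStaleConfigs()
}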
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
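The pod_ready.go:81 entries above record 4m readiness waits on metrics-server pods that never turn Ready, after which the extra wait is abandoned. Below is a condensed client-go sketch of that kind of poll-with-deadline, assuming a reachable cluster via the default kubeconfig and reusing a pod name from the log purely as an example value; it approximates the shape of the wait loop, not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// elapses, roughly the shape of the 4m wait that times out in the log above.
// Error handling is condensed for brevity.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
}

func main() {
	// Assumes the default kubeconfig; the pod name is the metrics-server replica
	// from the log and serves only as an example.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println(err)
		return
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-sfnng", 4*time.Minute))
}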
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
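	The no-preload-884893 start above completes with metrics-server-6867b74b74-w47b2 still Pending in both kube-system pod listings. A minimal follow-up check, assuming the kubectl context created for this profile and the addon's default deployment name and k8s-app=metrics-server label, would be:
	
		kubectl --context no-preload-884893 -n kube-system get pods -l k8s-app=metrics-server
		kubectl --context no-preload-884893 -n kube-system rollout status deployment/metrics-server --timeout=120s
	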
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
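	The old-k8s-version-390782 start fails in the wait-control-plane phase because the kubelet never answers on localhost:10248. A sketch of the follow-up the output above suggests, assuming the same profile name and driver, would be:
	
		minikube -p old-k8s-version-390782 logs --file=logs.txt
		minikube -p old-k8s-version-390782 ssh -- sudo journalctl -xeu kubelet
		minikube start -p old-k8s-version-390782 --extra-config=kubelet.cgroup-driver=systemd
	
	All three commands are taken from the suggestions printed in the log itself; whether the cgroup-driver override actually resolves the failure depends on the node's cgroup configuration.
	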
	
	
	==> CRI-O <==
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.483041219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686362483014407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43089afe-d642-4e4b-949c-bac53ce4ac38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.483526395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff39ce5b-5493-42a3-8df5-fb2dbd64a176 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.483600071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff39ce5b-5493-42a3-8df5-fb2dbd64a176 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.483644425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ff39ce5b-5493-42a3-8df5-fb2dbd64a176 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.512351290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75692f99-0176-4272-9a63-b3e6e0e5a9ce name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.512441270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75692f99-0176-4272-9a63-b3e6e0e5a9ce name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.513659295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a620931-8b27-477c-926c-303a0545ef0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.514254477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686362514212821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a620931-8b27-477c-926c-303a0545ef0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.514817757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0f67ba9-c19c-4b49-a380-d7e6b2be4ac5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.514915872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0f67ba9-c19c-4b49-a380-d7e6b2be4ac5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.514977281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a0f67ba9-c19c-4b49-a380-d7e6b2be4ac5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.544078082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=097176b5-5387-43a5-8cc3-319b07eca769 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.544154264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=097176b5-5387-43a5-8cc3-319b07eca769 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.545409550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87095305-680d-4d6a-a1d6-16309d994a91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.545852539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686362545826469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87095305-680d-4d6a-a1d6-16309d994a91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.546414278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67d13821-5355-435b-ac16-c3737cc21ad7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.546468832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67d13821-5355-435b-ac16-c3737cc21ad7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.546549195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67d13821-5355-435b-ac16-c3737cc21ad7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.576306197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d7647aa-ec86-4c99-9e4c-ec41a850e29b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.576377256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d7647aa-ec86-4c99-9e4c-ec41a850e29b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.577634547Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94ad6e71-7c49-4975-bd70-16b9e4316622 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.578036658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686362577993285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94ad6e71-7c49-4975-bd70-16b9e4316622 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.578569331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27a5c9bb-4bff-4777-ba47-4312fa7b7870 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.578615929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27a5c9bb-4bff-4777-ba47-4312fa7b7870 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:46:02 old-k8s-version-390782 crio[654]: time="2024-08-15 01:46:02.578650833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=27a5c9bb-4bff-4777-ba47-4312fa7b7870 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 01:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037789] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.678929] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.857055] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.487001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.860898] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.063147] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057764] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.185464] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.131345] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.258818] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.930800] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.065041] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.685778] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[Aug15 01:29] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 01:33] systemd-fstab-generator[5155]: Ignoring "noauto" option for root device
	[Aug15 01:35] systemd-fstab-generator[5437]: Ignoring "noauto" option for root device
	[  +0.071528] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:46:02 up 17 min,  0 users,  load average: 0.13, 0.05, 0.03
	Linux old-k8s-version-390782 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a45370, 0xc00092f280)
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: goroutine 156 [chan receive]:
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a43a70)
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: goroutine 157 [select]:
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00090aef0, 0x4f0ac20, 0xc0009fc0f0, 0x1, 0xc00009e0c0)
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001f6380, 0xc00009e0c0)
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a453b0, 0xc00092f340)
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 15 01:46:02 old-k8s-version-390782 kubelet[6628]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 15 01:46:02 old-k8s-version-390782 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 15 01:46:02 old-k8s-version-390782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (218.300857ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-390782" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.30s)
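The post-mortem above captures the failure state directly: the kubelet is crash-looping (systemd reports code=exited, status=255/EXCEPTION) and the API server on localhost:8443 is unreachable, which is why "kubectl describe nodes" fails and the container listing is empty. A rough manual re-check of that state (not part of the harness; assuming the same profile name and binary path shown in the log) would be:

out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782   # mirrors the status probe the harness runs above
out/minikube-linux-amd64 -p old-k8s-version-390782 ssh "sudo journalctl -u kubelet --no-pager -n 50"          # tail the kubelet unit log over SSH to inspect the crash loop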

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (530.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:51:57.55714761 +0000 UTC m=+6392.257379201
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.978µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-018537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
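The assertion chain at start_stop_delete_test.go:287-297 amounts to two checks: the kubernetes-dashboard pods must become Ready within 9m0s, and the dashboard-metrics-scraper deployment must reference registry.k8s.io/echoserver:1.4. A rough manual equivalent (not the test's own code; assuming the kubeconfig context and namespace names captured above) is:

# wait for the dashboard pods the test polls for
kubectl --context default-k8s-diff-port-018537 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s
# print the image(s) used by the scraper deployment, to compare against the expected echoserver image
kubectl --context default-k8s-diff-port-018537 --namespace=kubernetes-dashboard get deployment dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

Note that the "describe deploy" command above exits with "context deadline exceeded" after only ~2µs because the test's overall deadline had already expired when it ran, not because kubectl itself was misconfigured.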
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-018537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-018537 logs -n 25: (1.084620059s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo docker                        | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo cat                           | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo                               | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo find                          | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-641488 sudo crio                          | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-641488                                    | kindnet-641488            | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC | 15 Aug 24 01:51 UTC |
	| start   | -p enable-default-cni-641488                         | enable-default-cni-641488 | jenkins | v1.33.1 | 15 Aug 24 01:51 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:51:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:51:47.222103   78910 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:51:47.222210   78910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:51:47.222219   78910 out.go:304] Setting ErrFile to fd 2...
	I0815 01:51:47.222225   78910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:51:47.222478   78910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:51:47.223206   78910 out.go:298] Setting JSON to false
	I0815 01:51:47.224337   78910 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9252,"bootTime":1723677455,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:51:47.224398   78910 start.go:139] virtualization: kvm guest
	I0815 01:51:47.226094   78910 out.go:177] * [enable-default-cni-641488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:51:47.227523   78910 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:51:47.227518   78910 notify.go:220] Checking for updates...
	I0815 01:51:47.230114   78910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:51:47.231355   78910 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:51:47.232645   78910 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:51:47.234415   78910 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:51:47.235713   78910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:51:47.237294   78910 config.go:182] Loaded profile config "calico-641488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:51:47.237430   78910 config.go:182] Loaded profile config "custom-flannel-641488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:51:47.237525   78910 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:51:47.237642   78910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:51:47.277551   78910 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 01:51:47.278657   78910 start.go:297] selected driver: kvm2
	I0815 01:51:47.278685   78910 start.go:901] validating driver "kvm2" against <nil>
	I0815 01:51:47.278701   78910 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:51:47.279418   78910 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:51:47.279512   78910 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:51:47.296008   78910 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:51:47.296062   78910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0815 01:51:47.296281   78910 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0815 01:51:47.296306   78910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:51:47.296353   78910 cni.go:84] Creating CNI manager for "bridge"
	I0815 01:51:47.296358   78910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 01:51:47.296409   78910 start.go:340] cluster config:
	{Name:enable-default-cni-641488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:51:47.296507   78910 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:51:47.298181   78910 out.go:177] * Starting "enable-default-cni-641488" primary control-plane node in "enable-default-cni-641488" cluster
	I0815 01:51:43.076875   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:43.077385   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:43.077420   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:43.077305   77671 retry.go:31] will retry after 753.381214ms: waiting for machine to come up
	I0815 01:51:43.832761   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:43.833228   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:43.833248   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:43.833188   77671 retry.go:31] will retry after 1.455051213s: waiting for machine to come up
	I0815 01:51:45.289601   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:45.290037   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:45.290062   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:45.289996   77671 retry.go:31] will retry after 1.640255295s: waiting for machine to come up
	I0815 01:51:46.962957   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:46.963662   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:46.963681   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:46.963618   77671 retry.go:31] will retry after 2.185615678s: waiting for machine to come up
	I0815 01:51:47.978929   75558 pod_ready.go:92] pod "calico-kube-controllers-7fbd86d5c5-8mshk" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:47.978950   75558 pod_ready.go:81] duration metric: took 17.005666258s for pod "calico-kube-controllers-7fbd86d5c5-8mshk" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:47.978959   75558 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-rw6zm" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:49.986408   75558 pod_ready.go:102] pod "calico-node-rw6zm" in "kube-system" namespace has status "Ready":"False"
	I0815 01:51:51.486689   75558 pod_ready.go:92] pod "calico-node-rw6zm" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.486716   75558 pod_ready.go:81] duration metric: took 3.507750205s for pod "calico-node-rw6zm" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.486727   75558 pod_ready.go:78] waiting up to 15m0s for pod "coredns-6f6b679f8f-rn6kg" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.491292   75558 pod_ready.go:92] pod "coredns-6f6b679f8f-rn6kg" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.491314   75558 pod_ready.go:81] duration metric: took 4.580582ms for pod "coredns-6f6b679f8f-rn6kg" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.491336   75558 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.495371   75558 pod_ready.go:92] pod "etcd-calico-641488" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.495390   75558 pod_ready.go:81] duration metric: took 4.046558ms for pod "etcd-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.495401   75558 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.499419   75558 pod_ready.go:92] pod "kube-apiserver-calico-641488" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.499440   75558 pod_ready.go:81] duration metric: took 4.031391ms for pod "kube-apiserver-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.499451   75558 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.503064   75558 pod_ready.go:92] pod "kube-controller-manager-calico-641488" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.503083   75558 pod_ready.go:81] duration metric: took 3.625251ms for pod "kube-controller-manager-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.503092   75558 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-pgzc6" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:47.299366   78910 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:51:47.299438   78910 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:51:47.299459   78910 cache.go:56] Caching tarball of preloaded images
	I0815 01:51:47.299594   78910 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:51:47.299613   78910 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:51:47.299731   78910 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/enable-default-cni-641488/config.json ...
	I0815 01:51:47.299753   78910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/enable-default-cni-641488/config.json: {Name:mk687251bc6c8dc92a10a165b75d22713b5a4e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:51:47.299928   78910 start.go:360] acquireMachinesLock for enable-default-cni-641488: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:51:49.150549   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:49.151258   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:49.151288   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:49.151209   77671 retry.go:31] will retry after 2.76666072s: waiting for machine to come up
	I0815 01:51:51.919201   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | domain custom-flannel-641488 has defined MAC address 52:54:00:e8:44:f3 in network mk-custom-flannel-641488
	I0815 01:51:51.919604   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | unable to find current IP address of domain custom-flannel-641488 in network mk-custom-flannel-641488
	I0815 01:51:51.919634   77629 main.go:141] libmachine: (custom-flannel-641488) DBG | I0815 01:51:51.919562   77671 retry.go:31] will retry after 3.067802139s: waiting for machine to come up
	I0815 01:51:51.883873   75558 pod_ready.go:92] pod "kube-proxy-pgzc6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:51.883901   75558 pod_ready.go:81] duration metric: took 380.801922ms for pod "kube-proxy-pgzc6" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:51.883913   75558 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:52.283103   75558 pod_ready.go:92] pod "kube-scheduler-calico-641488" in "kube-system" namespace has status "Ready":"True"
	I0815 01:51:52.283139   75558 pod_ready.go:81] duration metric: took 399.215643ms for pod "kube-scheduler-calico-641488" in "kube-system" namespace to be "Ready" ...
	I0815 01:51:52.283155   75558 pod_ready.go:38] duration metric: took 21.317759277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:51:52.283175   75558 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:51:52.283247   75558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:51:52.306701   75558 api_server.go:72] duration metric: took 30.814635476s to wait for apiserver process to appear ...
	I0815 01:51:52.306724   75558 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:51:52.306744   75558 api_server.go:253] Checking apiserver healthz at https://192.168.72.47:8443/healthz ...
	I0815 01:51:52.312248   75558 api_server.go:279] https://192.168.72.47:8443/healthz returned 200:
	ok
	I0815 01:51:52.313384   75558 api_server.go:141] control plane version: v1.31.0
	I0815 01:51:52.313411   75558 api_server.go:131] duration metric: took 6.67935ms to wait for apiserver health ...
	I0815 01:51:52.313422   75558 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:51:52.487298   75558 system_pods.go:59] 9 kube-system pods found
	I0815 01:51:52.487332   75558 system_pods.go:61] "calico-kube-controllers-7fbd86d5c5-8mshk" [5e859106-5866-40b6-ac5a-ef3a7fa50a1d] Running
	I0815 01:51:52.487340   75558 system_pods.go:61] "calico-node-rw6zm" [e5f02a5c-9f41-4b02-aa66-17d5e34e3ab7] Running
	I0815 01:51:52.487344   75558 system_pods.go:61] "coredns-6f6b679f8f-rn6kg" [a8fcd496-090c-4622-9f4d-2aacea9bc69e] Running
	I0815 01:51:52.487347   75558 system_pods.go:61] "etcd-calico-641488" [4beaaa15-f39c-49d7-baa0-c23db98d3919] Running
	I0815 01:51:52.487351   75558 system_pods.go:61] "kube-apiserver-calico-641488" [4fb94da9-a5cf-448a-adfc-adb5a4bbdadb] Running
	I0815 01:51:52.487355   75558 system_pods.go:61] "kube-controller-manager-calico-641488" [de08eb8f-fe33-4274-9eb6-b2714f555ca4] Running
	I0815 01:51:52.487357   75558 system_pods.go:61] "kube-proxy-pgzc6" [3d903c6f-16bf-40e8-b007-38636dacd682] Running
	I0815 01:51:52.487360   75558 system_pods.go:61] "kube-scheduler-calico-641488" [952d1d1f-6731-4bcf-8680-63815246b49c] Running
	I0815 01:51:52.487363   75558 system_pods.go:61] "storage-provisioner" [bf9096ca-b14c-45dc-a2f4-86e486882524] Running
	I0815 01:51:52.487369   75558 system_pods.go:74] duration metric: took 173.936412ms to wait for pod list to return data ...
	I0815 01:51:52.487379   75558 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:51:52.683798   75558 default_sa.go:45] found service account: "default"
	I0815 01:51:52.683822   75558 default_sa.go:55] duration metric: took 196.438235ms for default service account to be created ...
	I0815 01:51:52.683831   75558 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:51:52.885926   75558 system_pods.go:86] 9 kube-system pods found
	I0815 01:51:52.885955   75558 system_pods.go:89] "calico-kube-controllers-7fbd86d5c5-8mshk" [5e859106-5866-40b6-ac5a-ef3a7fa50a1d] Running
	I0815 01:51:52.885960   75558 system_pods.go:89] "calico-node-rw6zm" [e5f02a5c-9f41-4b02-aa66-17d5e34e3ab7] Running
	I0815 01:51:52.885965   75558 system_pods.go:89] "coredns-6f6b679f8f-rn6kg" [a8fcd496-090c-4622-9f4d-2aacea9bc69e] Running
	I0815 01:51:52.885968   75558 system_pods.go:89] "etcd-calico-641488" [4beaaa15-f39c-49d7-baa0-c23db98d3919] Running
	I0815 01:51:52.885975   75558 system_pods.go:89] "kube-apiserver-calico-641488" [4fb94da9-a5cf-448a-adfc-adb5a4bbdadb] Running
	I0815 01:51:52.885979   75558 system_pods.go:89] "kube-controller-manager-calico-641488" [de08eb8f-fe33-4274-9eb6-b2714f555ca4] Running
	I0815 01:51:52.885983   75558 system_pods.go:89] "kube-proxy-pgzc6" [3d903c6f-16bf-40e8-b007-38636dacd682] Running
	I0815 01:51:52.885987   75558 system_pods.go:89] "kube-scheduler-calico-641488" [952d1d1f-6731-4bcf-8680-63815246b49c] Running
	I0815 01:51:52.885992   75558 system_pods.go:89] "storage-provisioner" [bf9096ca-b14c-45dc-a2f4-86e486882524] Running
	I0815 01:51:52.885998   75558 system_pods.go:126] duration metric: took 202.163411ms to wait for k8s-apps to be running ...
	I0815 01:51:52.886005   75558 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:51:52.886044   75558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:51:52.900972   75558 system_svc.go:56] duration metric: took 14.95761ms WaitForService to wait for kubelet
	I0815 01:51:52.901006   75558 kubeadm.go:582] duration metric: took 31.40894297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:51:52.901028   75558 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:51:53.083701   75558 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:51:53.083731   75558 node_conditions.go:123] node cpu capacity is 2
	I0815 01:51:53.083744   75558 node_conditions.go:105] duration metric: took 182.710641ms to run NodePressure ...
	I0815 01:51:53.083756   75558 start.go:241] waiting for startup goroutines ...
	I0815 01:51:53.083762   75558 start.go:246] waiting for cluster config update ...
	I0815 01:51:53.083771   75558 start.go:255] writing updated cluster config ...
	I0815 01:51:53.084030   75558 ssh_runner.go:195] Run: rm -f paused
	I0815 01:51:53.131468   75558 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:51:53.133348   75558 out.go:177] * Done! kubectl is now configured to use "calico-641488" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.082432490Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a262790f-9f48-41d8-ac94-90f4f9e60087,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685388186719081,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T01:29:40.324086756Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gxdqt,Uid:2d8541f1-a07e-4d34-80ae-f7b2529b560b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172368
5388177643440,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T01:29:40.324074287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad60ada56c6c2c46af12eff2b34ec9332e9c72b67fae7e546826b9a577422418,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gdpxh,Uid:e263386d-fda4-4841-ace9-81a1ba4e8a81,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685386373610090,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gdpxh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e263386d-fda4-4841-ace9-81a1ba4e8a81,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15
T01:29:40.324083445Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d5929cbb-30bf-4ce8-bd14-7e687e83492b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685380643599957,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T01:29:40.324085046Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&PodSandboxMetadata{Name:kube-proxy-s8mfb,Uid:6897db99-a461-4261-a7b4-17f13c72a724,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685380639614511,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a461-4261-a7b4-17f13c72a724,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-08-15T01:29:40.324080160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-018537,Uid:9e179917b807224665cb9060b1088131,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685374815762147,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e179917b807224665cb9060b1088131,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9e179917b807224665cb9060b1088131,kubernetes.io/config.seen: 2024-08-15T01:29:34.318302002Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-defaul
t-k8s-diff-port-018537,Uid:02f8d93b60baefc4b535da87456e33f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685374809578051,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 02f8d93b60baefc4b535da87456e33f3,kubernetes.io/config.seen: 2024-08-15T01:29:34.318302970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-018537,Uid:7895bb76a3dbe7d8ea2d01f06cb04572,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685374806186640,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04572,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.223:2379,kubernetes.io/config.hash: 7895bb76a3dbe7d8ea2d01f06cb04572,kubernetes.io/config.seen: 2024-08-15T01:29:34.383956639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-018537,Uid:973ebf14322aafa70988c1d6c9514109,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723685374781705900,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1d6c9514109,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-a
ddress.endpoint: 192.168.39.223:8444,kubernetes.io/config.hash: 973ebf14322aafa70988c1d6c9514109,kubernetes.io/config.seen: 2024-08-15T01:29:34.318298505Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=059f380d-4873-44e3-857e-079e22b0d99f name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.083385165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e87a5bcd-04c8-4872-8454-99f7b4bdf4f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.083442247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e87a5bcd-04c8-4872-8454-99f7b4bdf4f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.083911901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e87a5bcd-04c8-4872-8454-99f7b4bdf4f4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.092021872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=701abb14-5c50-4ebc-9649-b60050dd3988 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.092122264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=701abb14-5c50-4ebc-9649-b60050dd3988 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.093103372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7c5b07a-b6b1-4960-96cd-c0f61f59812e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.093463329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686718093443249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7c5b07a-b6b1-4960-96cd-c0f61f59812e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.093892554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c37dc39-9d6b-4ece-8a0e-218de3f693bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.093937624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c37dc39-9d6b-4ece-8a0e-218de3f693bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.094209279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c37dc39-9d6b-4ece-8a0e-218de3f693bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.133491568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27eab6ed-57ba-4b36-8ee1-ee26f4d48ba9 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.133561937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27eab6ed-57ba-4b36-8ee1-ee26f4d48ba9 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.134514553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecebe20d-2cab-4a94-9497-bf9d7f3a2d87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.134894975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686718134872519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecebe20d-2cab-4a94-9497-bf9d7f3a2d87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.135534271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98ed101-aa11-4a81-a274-aff522e5a541 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.135598136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98ed101-aa11-4a81-a274-aff522e5a541 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.135819342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d98ed101-aa11-4a81-a274-aff522e5a541 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.166151094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c2ef8ee-6309-4442-9db6-8b18b09ddbd5 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.166230724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c2ef8ee-6309-4442-9db6-8b18b09ddbd5 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.167287144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6400c11-1b92-4c18-a823-667c07baece4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.167684738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686718167661923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6400c11-1b92-4c18-a823-667c07baece4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.168176226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33001ed2-1bf0-468e-b828-13ca2cb52b26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.168229288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33001ed2-1bf0-468e-b828-13ca2cb52b26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:51:58 default-k8s-diff-port-018537 crio[729]: time="2024-08-15 01:51:58.168415491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685411582643300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91277761e8354d0469aff1995799cbbe87fb69a934b39d1a16eb8aaef4463e03,PodSandboxId:eb530c4afe1db9e09b54d1a05218807247888f8a08f1d6358ab09dd8dfd306e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723685391215065734,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a262790f-9f48-41d8-ac94-90f4f9e60087,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b,PodSandboxId:76dceb9cb96ddaa34e162f65928a3338af250c468ca8a6bddde14f3d1c8d0d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685388428618166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gxdqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d8541f1-a07e-4d34-80ae-f7b2529b560b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6,PodSandboxId:e9cf9f72683fd7d6ca51d895dd765c3acc38b8226aeaaa8ab8da61bae766f084,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685380862388453,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s8mfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6897db99-a
461-4261-a7b4-17f13c72a724,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f,PodSandboxId:d8dc76e0e139cb9bb6183fb5c11946612fe8e61eacb4309ed5044012b4dfbbbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723685380782374985,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5929cbb-30bf-4ce8-bd14-
7e687e83492b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771,PodSandboxId:24db94d899f54624e576732363c5ccb02af6ccd0681f53ef8c7d103d44030416,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685376248763843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973ebf14322aafa70988c1
d6c9514109,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049,PodSandboxId:ab70c54bebffcd4f1c2c21bf2ab10bf06ae2df230446af80f22c8bb667881871,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685376247296172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 9e179917b807224665cb9060b1088131,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872,PodSandboxId:c255231cfd07789193c3b191fa9f31c35cce8cb1223a2e782ec722d68bae6703,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685376225530549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7895bb76a3dbe7d8ea2d01f06cb04
572,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0,PodSandboxId:4c7ee67c2d22350bc274710b11c8d2b0165d0bc2855d7400e1cf9b5133419cdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685376233177246,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-018537,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02f8d93b60baefc4b535da87456e33f
3,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33001ed2-1bf0-468e-b828-13ca2cb52b26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7e16ea21684b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   d8dc76e0e139c       storage-provisioner
	91277761e8354       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   eb530c4afe1db       busybox
	6878af069904e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   76dceb9cb96dd       coredns-6f6b679f8f-gxdqt
	451245c6ce878       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      22 minutes ago      Running             kube-proxy                1                   e9cf9f72683fd       kube-proxy-s8mfb
	51d71abfa8f5c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   d8dc76e0e139c       storage-provisioner
	9aa794b86b772       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      22 minutes ago      Running             kube-apiserver            1                   24db94d899f54       kube-apiserver-default-k8s-diff-port-018537
	2f9821e596c0d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      22 minutes ago      Running             kube-controller-manager   1                   ab70c54bebffc       kube-controller-manager-default-k8s-diff-port-018537
	a093f3ec7d6d1       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      22 minutes ago      Running             kube-scheduler            1                   4c7ee67c2d223       kube-scheduler-default-k8s-diff-port-018537
	e0cc07c948ffd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      22 minutes ago      Running             etcd                      1                   c255231cfd077       etcd-default-k8s-diff-port-018537
	
	
	==> coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45202 - 35974 "HINFO IN 4574042729287797711.619855990244093827. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010305813s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-018537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-018537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=default-k8s-diff-port-018537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_22_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:22:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-018537
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:50:35 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:50:35 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:50:35 +0000   Thu, 15 Aug 2024 01:22:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:50:35 +0000   Thu, 15 Aug 2024 01:29:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    default-k8s-diff-port-018537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c194d510de044c42ad01b684edef68d1
	  System UUID:                c194d510-de04-4c42-ad01-b684edef68d1
	  Boot ID:                    49eb4833-ca02-4ac6-b00c-8451d140ab04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-gxdqt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-018537                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-018537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-018537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-s8mfb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-018537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-gdpxh                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-018537 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-018537 event: Registered Node default-k8s-diff-port-018537 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-018537 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-018537 event: Registered Node default-k8s-diff-port-018537 in Controller
	
	
	==> dmesg <==
	[Aug15 01:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051549] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038208] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.804256] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.886090] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.514859] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.754115] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058239] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056208] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.190920] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.132338] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.293068] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.064359] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +1.738729] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.067767] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.527354] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.462666] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +3.213250] kauditd_printk_skb: 64 callbacks suppressed
	[Aug15 01:30] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] <==
	{"level":"warn","ts":"2024-08-15T01:49:46.823836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.417081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:49:46.823877Z","caller":"traceutil/trace.go:171","msg":"trace[1613455341] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1633; }","duration":"124.653438ms","start":"2024-08-15T01:49:46.699215Z","end":"2024-08-15T01:49:46.823869Z","steps":["trace[1613455341] 'agreement among raft nodes before linearized reading'  (duration: 124.401406ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:49:46.824282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:49:46.444551Z","time spent":"379.283417ms","remote":"127.0.0.1:52404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-15T01:50:41.405343Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.421808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13167840673980998124 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.223\" mod_revision:1669 > success:<request_put:<key:\"/registry/masterleases/192.168.39.223\" value_size:67 lease:3944468637126222314 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.223\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T01:50:41.405779Z","caller":"traceutil/trace.go:171","msg":"trace[1663283931] linearizableReadLoop","detail":"{readStateIndex:1978; appliedIndex:1977; }","duration":"202.508907ms","start":"2024-08-15T01:50:41.203238Z","end":"2024-08-15T01:50:41.405747Z","steps":["trace[1663283931] 'read index received'  (duration: 57.433489ms)","trace[1663283931] 'applied index is now lower than readState.Index'  (duration: 145.073693ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T01:50:41.405835Z","caller":"traceutil/trace.go:171","msg":"trace[165628125] transaction","detail":"{read_only:false; response_revision:1678; number_of_response:1; }","duration":"206.700929ms","start":"2024-08-15T01:50:41.199104Z","end":"2024-08-15T01:50:41.405805Z","steps":["trace[165628125] 'process raft request'  (duration: 61.629057ms)","trace[165628125] 'compare'  (duration: 144.198271ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T01:50:41.406164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.905535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-15T01:50:41.407643Z","caller":"traceutil/trace.go:171","msg":"trace[1646457453] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1678; }","duration":"204.436457ms","start":"2024-08-15T01:50:41.203197Z","end":"2024-08-15T01:50:41.407633Z","steps":["trace[1646457453] 'agreement among raft nodes before linearized reading'  (duration: 202.67006ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:50:41.406286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.877889ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:50:41.408447Z","caller":"traceutil/trace.go:171","msg":"trace[1105857169] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1678; }","duration":"170.047725ms","start":"2024-08-15T01:50:41.238391Z","end":"2024-08-15T01:50:41.408439Z","steps":["trace[1105857169] 'agreement among raft nodes before linearized reading'  (duration: 167.830374ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:51:05.934516Z","caller":"traceutil/trace.go:171","msg":"trace[679757190] linearizableReadLoop","detail":"{readStateIndex:2002; appliedIndex:2001; }","duration":"243.16872ms","start":"2024-08-15T01:51:05.691334Z","end":"2024-08-15T01:51:05.934502Z","steps":["trace[679757190] 'read index received'  (duration: 242.991114ms)","trace[679757190] 'applied index is now lower than readState.Index'  (duration: 175.282µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T01:51:05.934688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.336956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:51:05.934730Z","caller":"traceutil/trace.go:171","msg":"trace[1665107784] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1698; }","duration":"243.395174ms","start":"2024-08-15T01:51:05.691329Z","end":"2024-08-15T01:51:05.934724Z","steps":["trace[1665107784] 'agreement among raft nodes before linearized reading'  (duration: 243.32281ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:51:05.934778Z","caller":"traceutil/trace.go:171","msg":"trace[717107957] transaction","detail":"{read_only:false; response_revision:1698; number_of_response:1; }","duration":"354.319118ms","start":"2024-08-15T01:51:05.580445Z","end":"2024-08-15T01:51:05.934764Z","steps":["trace[717107957] 'process raft request'  (duration: 353.949794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:51:05.935415Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T01:51:05.580429Z","time spent":"354.928539ms","remote":"127.0.0.1:52570","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1697 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-15T01:51:27.886576Z","caller":"traceutil/trace.go:171","msg":"trace[340062343] linearizableReadLoop","detail":"{readStateIndex:2026; appliedIndex:2025; }","duration":"195.766664ms","start":"2024-08-15T01:51:27.690790Z","end":"2024-08-15T01:51:27.886557Z","steps":["trace[340062343] 'read index received'  (duration: 195.597563ms)","trace[340062343] 'applied index is now lower than readState.Index'  (duration: 168.539µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T01:51:27.886850Z","caller":"traceutil/trace.go:171","msg":"trace[1200144233] transaction","detail":"{read_only:false; response_revision:1717; number_of_response:1; }","duration":"252.598416ms","start":"2024-08-15T01:51:27.634240Z","end":"2024-08-15T01:51:27.886838Z","steps":["trace[1200144233] 'process raft request'  (duration: 252.196588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:51:27.887108Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.299596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:51:27.887140Z","caller":"traceutil/trace.go:171","msg":"trace[1700797755] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1717; }","duration":"196.344029ms","start":"2024-08-15T01:51:27.690786Z","end":"2024-08-15T01:51:27.887130Z","steps":["trace[1700797755] 'agreement among raft nodes before linearized reading'  (duration: 196.278149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:51:27.887281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.339718ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-15T01:51:27.887309Z","caller":"traceutil/trace.go:171","msg":"trace[1119246518] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1717; }","duration":"188.371637ms","start":"2024-08-15T01:51:27.698931Z","end":"2024-08-15T01:51:27.887303Z","steps":["trace[1119246518] 'agreement among raft nodes before linearized reading'  (duration: 188.323735ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:51:36.824121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.240892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13167840673980998460 > lease_revoke:<id:36bd9153a793e2d6>","response":"size:28"}
	{"level":"info","ts":"2024-08-15T01:51:36.824313Z","caller":"traceutil/trace.go:171","msg":"trace[758966796] linearizableReadLoop","detail":"{readStateIndex:2034; appliedIndex:2033; }","duration":"134.905939ms","start":"2024-08-15T01:51:36.689394Z","end":"2024-08-15T01:51:36.824300Z","steps":["trace[758966796] 'read index received'  (duration: 25.165µs)","trace[758966796] 'applied index is now lower than readState.Index'  (duration: 134.879769ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T01:51:36.824475Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.083601ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:51:36.824522Z","caller":"traceutil/trace.go:171","msg":"trace[1575769705] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1723; }","duration":"135.143744ms","start":"2024-08-15T01:51:36.689372Z","end":"2024-08-15T01:51:36.824516Z","steps":["trace[1575769705] 'agreement among raft nodes before linearized reading'  (duration: 135.060835ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:51:58 up 22 min,  0 users,  load average: 0.10, 0.15, 0.10
	Linux default-k8s-diff-port-018537 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] <==
	I0815 01:47:40.746512       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:47:40.746562       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:49:39.745243       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:49:39.745391       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 01:49:40.747154       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 01:49:40.747289       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:49:40.747594       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 01:49:40.748100       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:49:40.749198       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:49:40.749293       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:50:40.749922       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:50:40.750140       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 01:50:40.750247       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:50:40.750282       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 01:50:40.751316       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:50:40.751449       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] <==
	E0815 01:46:43.440441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:46:43.845636       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:13.447536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:13.853247       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:43.453911       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:43.860737       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:13.460554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:13.868603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:43.466708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:43.876099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:49:13.473017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:49:13.884264       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:49:43.479245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:49:43.893295       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:50:13.486286       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:50:13.902273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:50:35.253437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-018537"
	E0815 01:50:43.494036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:50:43.911443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:51:01.384243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.307589ms"
	I0815 01:51:13.379274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="164.214µs"
	E0815 01:51:13.500731       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:51:13.927432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:51:43.508848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:51:43.936575       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:29:41.113509       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:29:41.123156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.223"]
	E0815 01:29:41.123327       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:29:41.153899       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:29:41.154029       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:29:41.154113       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:29:41.156545       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:29:41.156843       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:29:41.156889       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:29:41.158340       1 config.go:197] "Starting service config controller"
	I0815 01:29:41.158407       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:29:41.158449       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:29:41.158465       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:29:41.160665       1 config.go:326] "Starting node config controller"
	I0815 01:29:41.160689       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:29:41.259499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:29:41.259519       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:29:41.261031       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] <==
	I0815 01:29:37.246079       1 serving.go:386] Generated self-signed cert in-memory
	W0815 01:29:39.700623       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 01:29:39.700756       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 01:29:39.700827       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 01:29:39.700871       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 01:29:39.732061       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 01:29:39.732204       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:29:39.734470       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 01:29:39.734642       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 01:29:39.734680       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 01:29:39.734710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 01:29:39.835018       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:50:47 default-k8s-diff-port-018537 kubelet[937]: E0815 01:50:47.383511     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:50:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:50:54.702086     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686654701600789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:50:54.702112     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686654701600789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:01 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:01.365962     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:51:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:04.704338     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686664703905027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:04 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:04.704908     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686664703905027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:13 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:13.366249     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:51:14 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:14.707166     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686674706443186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:14 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:14.707506     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686674706443186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:24 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:24.709815     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686684709388661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:24 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:24.709874     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686684709388661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:26 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:26.370830     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:34.397176     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:34.711889     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686694711423492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:34 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:34.711914     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686694711423492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:40 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:40.366442     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:51:44 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:44.715225     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686704714678279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:44 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:44.715495     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686704714678279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:51 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:51.365668     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gdpxh" podUID="e263386d-fda4-4841-ace9-81a1ba4e8a81"
	Aug 15 01:51:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:54.717200     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686714716672749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:51:54 default-k8s-diff-port-018537 kubelet[937]: E0815 01:51:54.717243     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686714716672749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] <==
	I0815 01:29:40.970652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 01:30:10.977180       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] <==
	I0815 01:30:11.678456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:30:11.687519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:30:11.687615       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:30:29.085820       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:30:29.086063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89!
	I0815 01:30:29.086626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c2ace39-2e0f-490f-b0d0-c568fba5964f", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89 became leader
	I0815 01:30:29.187622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-018537_5780928b-b504-4fad-8f99-0862bbdbcc89!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gdpxh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh: exit status 1 (65.785287ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gdpxh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-018537 describe pod metrics-server-6867b74b74-gdpxh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (530.46s)
E0815 01:53:22.726904   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.733287   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.744724   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.766846   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.774258   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.808808   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:22.890260   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:23.051587   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:23.373419   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:24.015413   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:25.297329   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:27.859536   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:53:32.981129   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (428.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190398 -n embed-certs-190398
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:50:28.942243472 +0000 UTC m=+6303.642475068
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-190398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-190398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.433µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-190398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-190398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-190398 logs -n 25: (1.16344339s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:48 UTC | 15 Aug 24 01:48 UTC |
	| start   | -p newest-cni-840156 --memory=2200 --alsologtostderr   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:48 UTC | 15 Aug 24 01:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-840156             | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-840156                  | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-840156 --memory=2200 --alsologtostderr   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	| start   | -p auto-641488 --memory=3072                           | auto-641488                  | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-840156 image list                           | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:50 UTC |
	| delete  | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:50 UTC | 15 Aug 24 01:50 UTC |
	| start   | -p kindnet-641488                                      | kindnet-641488               | jenkins | v1.33.1 | 15 Aug 24 01:50 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:50:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:50:00.886685   75004 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:50:00.886941   75004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:50:00.886950   75004 out.go:304] Setting ErrFile to fd 2...
	I0815 01:50:00.886973   75004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:50:00.887204   75004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:50:00.887801   75004 out.go:298] Setting JSON to false
	I0815 01:50:00.888814   75004 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9146,"bootTime":1723677455,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:50:00.888875   75004 start.go:139] virtualization: kvm guest
	I0815 01:50:00.891300   75004 out.go:177] * [kindnet-641488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:50:00.892675   75004 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:50:00.892720   75004 notify.go:220] Checking for updates...
	I0815 01:50:00.894885   75004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:50:00.896049   75004 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:50:00.897137   75004 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:50:00.898159   75004 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:50:00.899209   75004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:50:00.900615   75004 config.go:182] Loaded profile config "auto-641488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:50:00.900775   75004 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:50:00.900877   75004 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:50:00.900985   75004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:50:00.937739   75004 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 01:50:00.938943   75004 start.go:297] selected driver: kvm2
	I0815 01:50:00.938965   75004 start.go:901] validating driver "kvm2" against <nil>
	I0815 01:50:00.938979   75004 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:50:00.939678   75004 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:50:00.939776   75004 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:50:00.955306   75004 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:50:00.955371   75004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 01:50:00.955582   75004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:50:00.955615   75004 cni.go:84] Creating CNI manager for "kindnet"
	I0815 01:50:00.955620   75004 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 01:50:00.955683   75004 start.go:340] cluster config:
	{Name:kindnet-641488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:50:00.955772   75004 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:50:00.957392   75004 out.go:177] * Starting "kindnet-641488" primary control-plane node in "kindnet-641488" cluster
	I0815 01:50:03.458688   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.459277   74260 main.go:141] libmachine: (auto-641488) Found IP for machine: 192.168.61.21
	I0815 01:50:03.459305   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has current primary IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.459337   74260 main.go:141] libmachine: (auto-641488) Reserving static IP address...
	I0815 01:50:03.459678   74260 main.go:141] libmachine: (auto-641488) DBG | unable to find host DHCP lease matching {name: "auto-641488", mac: "52:54:00:c7:0e:e4", ip: "192.168.61.21"} in network mk-auto-641488
	I0815 01:50:03.536021   74260 main.go:141] libmachine: (auto-641488) DBG | Getting to WaitForSSH function...
	I0815 01:50:03.536053   74260 main.go:141] libmachine: (auto-641488) Reserved static IP address: 192.168.61.21
	I0815 01:50:03.536096   74260 main.go:141] libmachine: (auto-641488) Waiting for SSH to be available...
	I0815 01:50:03.538699   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.539095   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:03.539123   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.539312   74260 main.go:141] libmachine: (auto-641488) DBG | Using SSH client type: external
	I0815 01:50:03.539339   74260 main.go:141] libmachine: (auto-641488) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa (-rw-------)
	I0815 01:50:03.539376   74260 main.go:141] libmachine: (auto-641488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:50:03.539402   74260 main.go:141] libmachine: (auto-641488) DBG | About to run SSH command:
	I0815 01:50:03.539417   74260 main.go:141] libmachine: (auto-641488) DBG | exit 0
	I0815 01:50:03.664957   74260 main.go:141] libmachine: (auto-641488) DBG | SSH cmd err, output: <nil>: 
	I0815 01:50:03.665205   74260 main.go:141] libmachine: (auto-641488) KVM machine creation complete!
	I0815 01:50:03.665533   74260 main.go:141] libmachine: (auto-641488) Calling .GetConfigRaw
	I0815 01:50:03.666024   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:03.666206   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:03.666365   74260 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 01:50:03.666392   74260 main.go:141] libmachine: (auto-641488) Calling .GetState
	I0815 01:50:03.667629   74260 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 01:50:03.667646   74260 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 01:50:03.667653   74260 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 01:50:03.667662   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:03.670012   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.670419   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:03.670454   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.670597   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:03.670803   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.670939   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.671113   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:03.671288   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:03.671512   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:03.671524   74260 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 01:50:03.779794   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:50:03.779820   74260 main.go:141] libmachine: Detecting the provisioner...
	I0815 01:50:03.779830   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:03.782844   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.783235   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:03.783258   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.783395   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:03.783569   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.783738   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.783871   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:03.784063   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:03.784235   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:03.784245   74260 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 01:50:03.893473   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 01:50:03.893599   74260 main.go:141] libmachine: found compatible host: buildroot
	I0815 01:50:03.893611   74260 main.go:141] libmachine: Provisioning with buildroot...
	I0815 01:50:03.893620   74260 main.go:141] libmachine: (auto-641488) Calling .GetMachineName
	I0815 01:50:03.893905   74260 buildroot.go:166] provisioning hostname "auto-641488"
	I0815 01:50:03.893929   74260 main.go:141] libmachine: (auto-641488) Calling .GetMachineName
	I0815 01:50:03.894122   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:03.896872   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.897185   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:03.897207   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:03.897459   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:03.897635   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.897768   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:03.897939   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:03.898117   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:03.898284   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:03.898296   74260 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-641488 && echo "auto-641488" | sudo tee /etc/hostname
	I0815 01:50:04.018383   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-641488
	
	I0815 01:50:04.018417   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:04.021394   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.021749   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.021777   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.021937   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:04.022121   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.022234   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.022329   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:04.022499   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:04.022668   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:04.022683   74260 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-641488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-641488/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-641488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:50:00.958458   75004 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:50:00.958490   75004 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:50:00.958497   75004 cache.go:56] Caching tarball of preloaded images
	I0815 01:50:00.958586   75004 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:50:00.958598   75004 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:50:00.958692   75004 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kindnet-641488/config.json ...
	I0815 01:50:00.958708   75004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/kindnet-641488/config.json: {Name:mkd93e36d18d9b26d04e0c91d29b2e98f6621e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:00.958836   75004 start.go:360] acquireMachinesLock for kindnet-641488: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:50:05.125318   75004 start.go:364] duration metric: took 4.166446776s to acquireMachinesLock for "kindnet-641488"
	I0815 01:50:05.125383   75004 start.go:93] Provisioning new machine with config: &{Name:kindnet-641488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:50:05.125547   75004 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 01:50:05.128132   75004 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 01:50:05.128336   75004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:05.128387   75004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:05.145364   75004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0815 01:50:05.145885   75004 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:05.146432   75004 main.go:141] libmachine: Using API Version  1
	I0815 01:50:05.146453   75004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:05.146800   75004 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:05.147000   75004 main.go:141] libmachine: (kindnet-641488) Calling .GetMachineName
	I0815 01:50:05.147170   75004 main.go:141] libmachine: (kindnet-641488) Calling .DriverName
	I0815 01:50:05.147320   75004 start.go:159] libmachine.API.Create for "kindnet-641488" (driver="kvm2")
	I0815 01:50:05.147347   75004 client.go:168] LocalClient.Create starting
	I0815 01:50:05.147379   75004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem
	I0815 01:50:05.147419   75004 main.go:141] libmachine: Decoding PEM data...
	I0815 01:50:05.147438   75004 main.go:141] libmachine: Parsing certificate...
	I0815 01:50:05.147506   75004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem
	I0815 01:50:05.147529   75004 main.go:141] libmachine: Decoding PEM data...
	I0815 01:50:05.147548   75004 main.go:141] libmachine: Parsing certificate...
	I0815 01:50:05.147573   75004 main.go:141] libmachine: Running pre-create checks...
	I0815 01:50:05.147590   75004 main.go:141] libmachine: (kindnet-641488) Calling .PreCreateCheck
	I0815 01:50:05.147881   75004 main.go:141] libmachine: (kindnet-641488) Calling .GetConfigRaw
	I0815 01:50:05.148260   75004 main.go:141] libmachine: Creating machine...
	I0815 01:50:05.148273   75004 main.go:141] libmachine: (kindnet-641488) Calling .Create
	I0815 01:50:05.148385   75004 main.go:141] libmachine: (kindnet-641488) Creating KVM machine...
	I0815 01:50:05.149830   75004 main.go:141] libmachine: (kindnet-641488) DBG | found existing default KVM network
	I0815 01:50:05.152038   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.150972   75070 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bf:c5:70} reservation:<nil>}
	I0815 01:50:05.152538   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.152459   75070 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034e020}
	I0815 01:50:05.152588   75004 main.go:141] libmachine: (kindnet-641488) DBG | created network xml: 
	I0815 01:50:05.152612   75004 main.go:141] libmachine: (kindnet-641488) DBG | <network>
	I0815 01:50:05.152635   75004 main.go:141] libmachine: (kindnet-641488) DBG |   <name>mk-kindnet-641488</name>
	I0815 01:50:05.152659   75004 main.go:141] libmachine: (kindnet-641488) DBG |   <dns enable='no'/>
	I0815 01:50:05.152675   75004 main.go:141] libmachine: (kindnet-641488) DBG |   
	I0815 01:50:05.152685   75004 main.go:141] libmachine: (kindnet-641488) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0815 01:50:05.152706   75004 main.go:141] libmachine: (kindnet-641488) DBG |     <dhcp>
	I0815 01:50:05.152725   75004 main.go:141] libmachine: (kindnet-641488) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0815 01:50:05.152735   75004 main.go:141] libmachine: (kindnet-641488) DBG |     </dhcp>
	I0815 01:50:05.152742   75004 main.go:141] libmachine: (kindnet-641488) DBG |   </ip>
	I0815 01:50:05.152748   75004 main.go:141] libmachine: (kindnet-641488) DBG |   
	I0815 01:50:05.152756   75004 main.go:141] libmachine: (kindnet-641488) DBG | </network>
	I0815 01:50:05.152766   75004 main.go:141] libmachine: (kindnet-641488) DBG | 
	I0815 01:50:05.158242   75004 main.go:141] libmachine: (kindnet-641488) DBG | trying to create private KVM network mk-kindnet-641488 192.168.50.0/24...
	I0815 01:50:05.229823   75004 main.go:141] libmachine: (kindnet-641488) Setting up store path in /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488 ...
	I0815 01:50:05.229893   75004 main.go:141] libmachine: (kindnet-641488) DBG | private KVM network mk-kindnet-641488 192.168.50.0/24 created
	I0815 01:50:05.229910   75004 main.go:141] libmachine: (kindnet-641488) Building disk image from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 01:50:05.229954   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.229717   75070 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:50:05.230023   75004 main.go:141] libmachine: (kindnet-641488) Downloading /home/jenkins/minikube-integration/19443-13088/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 01:50:05.492733   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.492597   75070 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488/id_rsa...
	I0815 01:50:05.680949   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.680809   75070 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488/kindnet-641488.rawdisk...
	I0815 01:50:05.680975   75004 main.go:141] libmachine: (kindnet-641488) DBG | Writing magic tar header
	I0815 01:50:05.680987   75004 main.go:141] libmachine: (kindnet-641488) DBG | Writing SSH key tar header
	I0815 01:50:05.681095   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:05.681014   75070 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488 ...
	I0815 01:50:05.681192   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488
	I0815 01:50:05.681219   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488 (perms=drwx------)
	I0815 01:50:05.681251   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube/machines
	I0815 01:50:05.681267   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube/machines (perms=drwxr-xr-x)
	I0815 01:50:05.681289   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:50:05.681310   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19443-13088
	I0815 01:50:05.681328   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088/.minikube (perms=drwxr-xr-x)
	I0815 01:50:05.681340   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins/minikube-integration/19443-13088 (perms=drwxrwxr-x)
	I0815 01:50:05.681350   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 01:50:05.681367   75004 main.go:141] libmachine: (kindnet-641488) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 01:50:05.681379   75004 main.go:141] libmachine: (kindnet-641488) Creating domain...
	I0815 01:50:05.681392   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 01:50:05.681405   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home/jenkins
	I0815 01:50:05.681416   75004 main.go:141] libmachine: (kindnet-641488) DBG | Checking permissions on dir: /home
	I0815 01:50:05.681426   75004 main.go:141] libmachine: (kindnet-641488) DBG | Skipping /home - not owner
	I0815 01:50:05.682784   75004 main.go:141] libmachine: (kindnet-641488) define libvirt domain using xml: 
	I0815 01:50:05.682807   75004 main.go:141] libmachine: (kindnet-641488) <domain type='kvm'>
	I0815 01:50:05.682818   75004 main.go:141] libmachine: (kindnet-641488)   <name>kindnet-641488</name>
	I0815 01:50:05.682830   75004 main.go:141] libmachine: (kindnet-641488)   <memory unit='MiB'>3072</memory>
	I0815 01:50:05.682841   75004 main.go:141] libmachine: (kindnet-641488)   <vcpu>2</vcpu>
	I0815 01:50:05.682849   75004 main.go:141] libmachine: (kindnet-641488)   <features>
	I0815 01:50:05.682860   75004 main.go:141] libmachine: (kindnet-641488)     <acpi/>
	I0815 01:50:05.682867   75004 main.go:141] libmachine: (kindnet-641488)     <apic/>
	I0815 01:50:05.682885   75004 main.go:141] libmachine: (kindnet-641488)     <pae/>
	I0815 01:50:05.682894   75004 main.go:141] libmachine: (kindnet-641488)     
	I0815 01:50:05.682920   75004 main.go:141] libmachine: (kindnet-641488)   </features>
	I0815 01:50:05.682942   75004 main.go:141] libmachine: (kindnet-641488)   <cpu mode='host-passthrough'>
	I0815 01:50:05.682951   75004 main.go:141] libmachine: (kindnet-641488)   
	I0815 01:50:05.682962   75004 main.go:141] libmachine: (kindnet-641488)   </cpu>
	I0815 01:50:05.682972   75004 main.go:141] libmachine: (kindnet-641488)   <os>
	I0815 01:50:05.682982   75004 main.go:141] libmachine: (kindnet-641488)     <type>hvm</type>
	I0815 01:50:05.682993   75004 main.go:141] libmachine: (kindnet-641488)     <boot dev='cdrom'/>
	I0815 01:50:05.683000   75004 main.go:141] libmachine: (kindnet-641488)     <boot dev='hd'/>
	I0815 01:50:05.683006   75004 main.go:141] libmachine: (kindnet-641488)     <bootmenu enable='no'/>
	I0815 01:50:05.683012   75004 main.go:141] libmachine: (kindnet-641488)   </os>
	I0815 01:50:05.683035   75004 main.go:141] libmachine: (kindnet-641488)   <devices>
	I0815 01:50:05.683051   75004 main.go:141] libmachine: (kindnet-641488)     <disk type='file' device='cdrom'>
	I0815 01:50:05.683068   75004 main.go:141] libmachine: (kindnet-641488)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488/boot2docker.iso'/>
	I0815 01:50:05.683079   75004 main.go:141] libmachine: (kindnet-641488)       <target dev='hdc' bus='scsi'/>
	I0815 01:50:05.683089   75004 main.go:141] libmachine: (kindnet-641488)       <readonly/>
	I0815 01:50:05.683098   75004 main.go:141] libmachine: (kindnet-641488)     </disk>
	I0815 01:50:05.683107   75004 main.go:141] libmachine: (kindnet-641488)     <disk type='file' device='disk'>
	I0815 01:50:05.683121   75004 main.go:141] libmachine: (kindnet-641488)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 01:50:05.683136   75004 main.go:141] libmachine: (kindnet-641488)       <source file='/home/jenkins/minikube-integration/19443-13088/.minikube/machines/kindnet-641488/kindnet-641488.rawdisk'/>
	I0815 01:50:05.683149   75004 main.go:141] libmachine: (kindnet-641488)       <target dev='hda' bus='virtio'/>
	I0815 01:50:05.683160   75004 main.go:141] libmachine: (kindnet-641488)     </disk>
	I0815 01:50:05.683169   75004 main.go:141] libmachine: (kindnet-641488)     <interface type='network'>
	I0815 01:50:05.683177   75004 main.go:141] libmachine: (kindnet-641488)       <source network='mk-kindnet-641488'/>
	I0815 01:50:05.683185   75004 main.go:141] libmachine: (kindnet-641488)       <model type='virtio'/>
	I0815 01:50:05.683191   75004 main.go:141] libmachine: (kindnet-641488)     </interface>
	I0815 01:50:05.683196   75004 main.go:141] libmachine: (kindnet-641488)     <interface type='network'>
	I0815 01:50:05.683201   75004 main.go:141] libmachine: (kindnet-641488)       <source network='default'/>
	I0815 01:50:05.683208   75004 main.go:141] libmachine: (kindnet-641488)       <model type='virtio'/>
	I0815 01:50:05.683216   75004 main.go:141] libmachine: (kindnet-641488)     </interface>
	I0815 01:50:05.683233   75004 main.go:141] libmachine: (kindnet-641488)     <serial type='pty'>
	I0815 01:50:05.683248   75004 main.go:141] libmachine: (kindnet-641488)       <target port='0'/>
	I0815 01:50:05.683259   75004 main.go:141] libmachine: (kindnet-641488)     </serial>
	I0815 01:50:05.683266   75004 main.go:141] libmachine: (kindnet-641488)     <console type='pty'>
	I0815 01:50:05.683276   75004 main.go:141] libmachine: (kindnet-641488)       <target type='serial' port='0'/>
	I0815 01:50:05.683286   75004 main.go:141] libmachine: (kindnet-641488)     </console>
	I0815 01:50:05.683306   75004 main.go:141] libmachine: (kindnet-641488)     <rng model='virtio'>
	I0815 01:50:05.683321   75004 main.go:141] libmachine: (kindnet-641488)       <backend model='random'>/dev/random</backend>
	I0815 01:50:05.683331   75004 main.go:141] libmachine: (kindnet-641488)     </rng>
	I0815 01:50:05.683341   75004 main.go:141] libmachine: (kindnet-641488)     
	I0815 01:50:05.683351   75004 main.go:141] libmachine: (kindnet-641488)     
	I0815 01:50:05.683358   75004 main.go:141] libmachine: (kindnet-641488)   </devices>
	I0815 01:50:05.683367   75004 main.go:141] libmachine: (kindnet-641488) </domain>
	I0815 01:50:05.683376   75004 main.go:141] libmachine: (kindnet-641488) 
	I0815 01:50:05.687559   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:eb:5d:25 in network default
	I0815 01:50:05.688234   75004 main.go:141] libmachine: (kindnet-641488) Ensuring networks are active...
	I0815 01:50:05.688249   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:05.689051   75004 main.go:141] libmachine: (kindnet-641488) Ensuring network default is active
	I0815 01:50:05.689469   75004 main.go:141] libmachine: (kindnet-641488) Ensuring network mk-kindnet-641488 is active
	I0815 01:50:05.690177   75004 main.go:141] libmachine: (kindnet-641488) Getting domain xml...
	I0815 01:50:05.691105   75004 main.go:141] libmachine: (kindnet-641488) Creating domain...
	I0815 01:50:04.140447   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:50:04.140476   74260 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:50:04.140510   74260 buildroot.go:174] setting up certificates
	I0815 01:50:04.140518   74260 provision.go:84] configureAuth start
	I0815 01:50:04.140533   74260 main.go:141] libmachine: (auto-641488) Calling .GetMachineName
	I0815 01:50:04.140815   74260 main.go:141] libmachine: (auto-641488) Calling .GetIP
	I0815 01:50:04.143655   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.144003   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.144021   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.144163   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:04.146561   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.146874   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.146901   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.146983   74260 provision.go:143] copyHostCerts
	I0815 01:50:04.147052   74260 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:50:04.147074   74260 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:50:04.147147   74260 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:50:04.147267   74260 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:50:04.147277   74260 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:50:04.147311   74260 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:50:04.147466   74260 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:50:04.147491   74260 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:50:04.147583   74260 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:50:04.147688   74260 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.auto-641488 san=[127.0.0.1 192.168.61.21 auto-641488 localhost minikube]
	I0815 01:50:04.454175   74260 provision.go:177] copyRemoteCerts
	I0815 01:50:04.454234   74260 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:50:04.454255   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:04.456948   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.457261   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.457301   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.457464   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:04.457656   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.457795   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:04.457969   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:04.543146   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:50:04.566619   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0815 01:50:04.588886   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:50:04.610442   74260 provision.go:87] duration metric: took 469.91139ms to configureAuth
	I0815 01:50:04.610477   74260 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:50:04.610646   74260 config.go:182] Loaded profile config "auto-641488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:50:04.610745   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:04.613579   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.614010   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.614039   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.614203   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:04.614423   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.614589   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.614745   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:04.614895   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:04.615092   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:04.615109   74260 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:50:04.875330   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:50:04.875367   74260 main.go:141] libmachine: Checking connection to Docker...
	I0815 01:50:04.875374   74260 main.go:141] libmachine: (auto-641488) Calling .GetURL
	I0815 01:50:04.876791   74260 main.go:141] libmachine: (auto-641488) DBG | Using libvirt version 6000000
	I0815 01:50:04.879088   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.879427   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.879477   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.879712   74260 main.go:141] libmachine: Docker is up and running!
	I0815 01:50:04.879728   74260 main.go:141] libmachine: Reticulating splines...
	I0815 01:50:04.879736   74260 client.go:171] duration metric: took 26.149607661s to LocalClient.Create
	I0815 01:50:04.879762   74260 start.go:167] duration metric: took 26.149676085s to libmachine.API.Create "auto-641488"
	I0815 01:50:04.879774   74260 start.go:293] postStartSetup for "auto-641488" (driver="kvm2")
	I0815 01:50:04.879785   74260 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:50:04.879807   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:04.880076   74260 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:50:04.880096   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:04.882271   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.882628   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:04.882655   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:04.882752   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:04.882942   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:04.883137   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:04.883284   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:04.967508   74260 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:50:04.971451   74260 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:50:04.971475   74260 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:50:04.971546   74260 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:50:04.971661   74260 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:50:04.971779   74260 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:50:04.980758   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:50:05.003129   74260 start.go:296] duration metric: took 123.340422ms for postStartSetup
	I0815 01:50:05.003185   74260 main.go:141] libmachine: (auto-641488) Calling .GetConfigRaw
	I0815 01:50:05.003793   74260 main.go:141] libmachine: (auto-641488) Calling .GetIP
	I0815 01:50:05.006706   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.007099   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:05.007126   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.007356   74260 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/config.json ...
	I0815 01:50:05.007541   74260 start.go:128] duration metric: took 26.298108177s to createHost
	I0815 01:50:05.007564   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:05.009910   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.010220   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:05.010248   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.010421   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:05.010565   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:05.010660   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:05.010733   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:05.010832   74260 main.go:141] libmachine: Using SSH client type: native
	I0815 01:50:05.011000   74260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.21 22 <nil> <nil>}
	I0815 01:50:05.011014   74260 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:50:05.125110   74260 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723686605.095192310
	
	I0815 01:50:05.125131   74260 fix.go:216] guest clock: 1723686605.095192310
	I0815 01:50:05.125139   74260 fix.go:229] Guest: 2024-08-15 01:50:05.09519231 +0000 UTC Remote: 2024-08-15 01:50:05.007553254 +0000 UTC m=+35.944954496 (delta=87.639056ms)
	I0815 01:50:05.125189   74260 fix.go:200] guest clock delta is within tolerance: 87.639056ms
	I0815 01:50:05.125199   74260 start.go:83] releasing machines lock for "auto-641488", held for 26.415957119s
	I0815 01:50:05.125228   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:05.125505   74260 main.go:141] libmachine: (auto-641488) Calling .GetIP
	I0815 01:50:05.128159   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.128597   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:05.128621   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.128854   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:05.129346   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:05.129549   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:05.129645   74260 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:50:05.129698   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:05.129751   74260 ssh_runner.go:195] Run: cat /version.json
	I0815 01:50:05.129774   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:05.132446   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.132472   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.132894   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:05.132923   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.132952   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:05.132968   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:05.133072   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:05.133241   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:05.133412   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:05.133418   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:05.133574   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:05.133582   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:05.133725   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:05.133833   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:05.217611   74260 ssh_runner.go:195] Run: systemctl --version
	I0815 01:50:05.250952   74260 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:50:05.417047   74260 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:50:05.424632   74260 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:50:05.424724   74260 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:50:05.446419   74260 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:50:05.446444   74260 start.go:495] detecting cgroup driver to use...
	I0815 01:50:05.446498   74260 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:50:05.462737   74260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:50:05.477446   74260 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:50:05.477516   74260 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:50:05.491192   74260 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:50:05.506167   74260 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:50:05.627977   74260 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:50:05.781447   74260 docker.go:233] disabling docker service ...
	I0815 01:50:05.781522   74260 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:50:05.795396   74260 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:50:05.807907   74260 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:50:05.959366   74260 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:50:06.085640   74260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:50:06.098837   74260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:50:06.116065   74260 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:50:06.116122   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.125841   74260 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:50:06.125901   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.136185   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.146629   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.157328   74260 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:50:06.168080   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.178804   74260 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.196148   74260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:50:06.206086   74260 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:50:06.216952   74260 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:50:06.217000   74260 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:50:06.230988   74260 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:50:06.240456   74260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:50:06.383279   74260 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:50:06.533243   74260 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:50:06.533321   74260 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:50:06.538430   74260 start.go:563] Will wait 60s for crictl version
	I0815 01:50:06.538485   74260 ssh_runner.go:195] Run: which crictl
	I0815 01:50:06.542236   74260 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:50:06.587687   74260 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:50:06.587773   74260 ssh_runner.go:195] Run: crio --version
	I0815 01:50:06.617171   74260 ssh_runner.go:195] Run: crio --version
	I0815 01:50:06.648039   74260 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:50:06.649305   74260 main.go:141] libmachine: (auto-641488) Calling .GetIP
	I0815 01:50:06.654334   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:06.655088   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:06.655117   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:06.655469   74260 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:50:06.659527   74260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:50:06.672367   74260 kubeadm.go:883] updating cluster {Name:auto-641488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:auto-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:50:06.672472   74260 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:50:06.672509   74260 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:50:06.707745   74260 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:50:06.707806   74260 ssh_runner.go:195] Run: which lz4
	I0815 01:50:06.711625   74260 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:50:06.716487   74260 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:50:06.716523   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:50:07.969320   74260 crio.go:462] duration metric: took 1.257721142s to copy over tarball
	I0815 01:50:07.969403   74260 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:50:07.024738   75004 main.go:141] libmachine: (kindnet-641488) Waiting to get IP...
	I0815 01:50:07.025735   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:07.026400   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:07.026428   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:07.026397   75070 retry.go:31] will retry after 278.368831ms: waiting for machine to come up
	I0815 01:50:07.306947   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:07.307556   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:07.307579   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:07.307477   75070 retry.go:31] will retry after 338.543898ms: waiting for machine to come up
	I0815 01:50:07.648047   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:07.648617   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:07.648644   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:07.648568   75070 retry.go:31] will retry after 405.24842ms: waiting for machine to come up
	I0815 01:50:08.055179   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:08.055733   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:08.055763   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:08.055688   75070 retry.go:31] will retry after 512.119378ms: waiting for machine to come up
	I0815 01:50:08.569479   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:08.570081   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:08.570104   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:08.570040   75070 retry.go:31] will retry after 601.159246ms: waiting for machine to come up
	I0815 01:50:09.172835   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:09.173355   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:09.173389   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:09.173284   75070 retry.go:31] will retry after 850.892437ms: waiting for machine to come up
	I0815 01:50:10.025533   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:10.026073   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:10.026107   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:10.026009   75070 retry.go:31] will retry after 944.04127ms: waiting for machine to come up
	I0815 01:50:10.273256   74260 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303819151s)
	I0815 01:50:10.273287   74260 crio.go:469] duration metric: took 2.303931838s to extract the tarball
	I0815 01:50:10.273296   74260 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:50:10.310151   74260 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:50:10.352721   74260 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:50:10.352744   74260 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:50:10.352751   74260 kubeadm.go:934] updating node { 192.168.61.21 8443 v1.31.0 crio true true} ...
	I0815 01:50:10.352859   74260 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-641488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:50:10.352921   74260 ssh_runner.go:195] Run: crio config
	I0815 01:50:10.423209   74260 cni.go:84] Creating CNI manager for ""
	I0815 01:50:10.423233   74260 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:50:10.423245   74260 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:50:10.423276   74260 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.21 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-641488 NodeName:auto-641488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:50:10.423464   74260 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-641488"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:50:10.423539   74260 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:50:10.435869   74260 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:50:10.435930   74260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:50:10.445358   74260 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0815 01:50:10.461329   74260 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:50:10.477960   74260 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0815 01:50:10.496542   74260 ssh_runner.go:195] Run: grep 192.168.61.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:50:10.500445   74260 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:50:10.513087   74260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:50:10.630950   74260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:50:10.651819   74260 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488 for IP: 192.168.61.21
	I0815 01:50:10.651844   74260 certs.go:194] generating shared ca certs ...
	I0815 01:50:10.651866   74260 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:10.652022   74260 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:50:10.652080   74260 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:50:10.652099   74260 certs.go:256] generating profile certs ...
	I0815 01:50:10.652156   74260 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.key
	I0815 01:50:10.652173   74260 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.crt with IP's: []
	I0815 01:50:10.771931   74260 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.crt ...
	I0815 01:50:10.771958   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.crt: {Name:mk942cac8bb382fed50db06f531db1cacfc20224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:10.772133   74260 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.key ...
	I0815 01:50:10.772144   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/client.key: {Name:mk851fcfbff8e6800489f17bc6fe52ea087eaf6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:10.772219   74260 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key.ee36bb3f
	I0815 01:50:10.772234   74260 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt.ee36bb3f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.21]
	I0815 01:50:11.053122   74260 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt.ee36bb3f ...
	I0815 01:50:11.053157   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt.ee36bb3f: {Name:mk07788f3569ec9ee0a8a1a6f4234155e0a8d0ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:11.053354   74260 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key.ee36bb3f ...
	I0815 01:50:11.053370   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key.ee36bb3f: {Name:mkea284d11fca50aa1109b29d280584fba6f9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:11.053483   74260 certs.go:381] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt.ee36bb3f -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt
	I0815 01:50:11.053599   74260 certs.go:385] copying /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key.ee36bb3f -> /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key
	I0815 01:50:11.053687   74260 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.key
	I0815 01:50:11.053708   74260 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.crt with IP's: []
	I0815 01:50:11.176905   74260 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.crt ...
	I0815 01:50:11.176936   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.crt: {Name:mke86cd8ca8d884960973ceb89a7c289a901e860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:11.177119   74260 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.key ...
	I0815 01:50:11.177133   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.key: {Name:mk9ea43e6050ccd4fb1621ff0e6e81c7c99ce410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:11.177338   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:50:11.177382   74260 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:50:11.177393   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:50:11.177425   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:50:11.177457   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:50:11.177488   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:50:11.177545   74260 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:50:11.178135   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:50:11.201426   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:50:11.225550   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:50:11.250169   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:50:11.274373   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0815 01:50:11.296409   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:50:11.318280   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:50:11.344286   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/auto-641488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:50:11.377773   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:50:11.405736   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:50:11.427467   74260 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:50:11.449915   74260 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:50:11.472326   74260 ssh_runner.go:195] Run: openssl version
	I0815 01:50:11.478371   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:50:11.490265   74260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:50:11.494835   74260 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:50:11.494911   74260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:50:11.500501   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:50:11.511407   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:50:11.522326   74260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:50:11.527826   74260 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:50:11.527874   74260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:50:11.533435   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:50:11.543972   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:50:11.554694   74260 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:50:11.559433   74260 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:50:11.559487   74260 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:50:11.565111   74260 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
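The openssl x509 -hash calls above are what determine the symlink names under /etc/ssl/certs: each CA file is linked under the hash of its subject name. A minimal sketch of that relationship, using the minikube CA and the hash value implied by the b5213941.0 link created earlier in this log (the test performs the equivalent steps itself):

	# Sketch only: print the subject-name hash, then create the matching
	# OpenSSL-style symlink, as the logged commands do for each CA file.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0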
	I0815 01:50:11.575466   74260 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:50:11.579732   74260 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 01:50:11.579805   74260 kubeadm.go:392] StartCluster: {Name:auto-641488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clu
sterName:auto-641488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:50:11.579893   74260 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:50:11.579960   74260 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:50:11.616262   74260 cri.go:89] found id: ""
	I0815 01:50:11.616340   74260 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:50:11.626764   74260 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:50:11.636580   74260 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:50:11.646494   74260 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:50:11.646511   74260 kubeadm.go:157] found existing configuration files:
	
	I0815 01:50:11.646552   74260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:50:11.655920   74260 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:50:11.655975   74260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:50:11.665305   74260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:50:11.673792   74260 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:50:11.673840   74260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:50:11.682551   74260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:50:11.691666   74260 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:50:11.691727   74260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:50:11.700669   74260 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:50:11.709900   74260 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:50:11.709952   74260 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:50:11.719041   74260 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:50:11.776215   74260 kubeadm.go:310] W0815 01:50:11.750704     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:50:11.777206   74260 kubeadm.go:310] W0815 01:50:11.751830     841 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:50:11.882388   74260 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
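The two kubeadm warnings above refer to the deprecated kubeadm.k8s.io/v1beta3 API used by the generated config. A hedged sketch of the migration they suggest, using the kubeadm binary path and config path shown in this log (the --new-config output filename is only illustrative; the test itself does not run this):

	sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	  kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-migrated.yaml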
	I0815 01:50:10.971586   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:10.972031   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:10.972059   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:10.971985   75070 retry.go:31] will retry after 1.376435886s: waiting for machine to come up
	I0815 01:50:12.350054   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:12.350582   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:12.350610   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:12.350530   75070 retry.go:31] will retry after 1.491328169s: waiting for machine to come up
	I0815 01:50:13.844011   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:13.844491   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:13.844509   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:13.844460   75070 retry.go:31] will retry after 1.755016463s: waiting for machine to come up
	I0815 01:50:15.600615   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:15.601072   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:15.601102   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:15.601020   75070 retry.go:31] will retry after 1.884023463s: waiting for machine to come up
	I0815 01:50:17.487256   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:17.487715   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:17.487766   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:17.487670   75070 retry.go:31] will retry after 2.858225868s: waiting for machine to come up
	I0815 01:50:20.347208   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:20.347730   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:20.347758   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:20.347679   75070 retry.go:31] will retry after 3.418158706s: waiting for machine to come up
	I0815 01:50:22.051480   74260 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:50:22.051551   74260 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:50:22.051643   74260 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:50:22.051745   74260 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:50:22.051856   74260 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:50:22.051973   74260 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:50:22.053692   74260 out.go:204]   - Generating certificates and keys ...
	I0815 01:50:22.053795   74260 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:50:22.053893   74260 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:50:22.053985   74260 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 01:50:22.054072   74260 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 01:50:22.054134   74260 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 01:50:22.054181   74260 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 01:50:22.054246   74260 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 01:50:22.054375   74260 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-641488 localhost] and IPs [192.168.61.21 127.0.0.1 ::1]
	I0815 01:50:22.054434   74260 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 01:50:22.054536   74260 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-641488 localhost] and IPs [192.168.61.21 127.0.0.1 ::1]
	I0815 01:50:22.054598   74260 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 01:50:22.054657   74260 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 01:50:22.054696   74260 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 01:50:22.054761   74260 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:50:22.054808   74260 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:50:22.054859   74260 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:50:22.054929   74260 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:50:22.054990   74260 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:50:22.055048   74260 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:50:22.055125   74260 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:50:22.055190   74260 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:50:22.056541   74260 out.go:204]   - Booting up control plane ...
	I0815 01:50:22.056631   74260 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:50:22.056736   74260 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:50:22.056802   74260 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:50:22.056904   74260 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:50:22.057055   74260 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:50:22.057095   74260 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:50:22.057200   74260 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:50:22.057309   74260 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:50:22.057369   74260 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.008539857s
	I0815 01:50:22.057449   74260 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:50:22.057542   74260 kubeadm.go:310] [api-check] The API server is healthy after 4.501706235s
	I0815 01:50:22.057644   74260 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:50:22.057774   74260 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:50:22.057823   74260 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:50:22.058033   74260 kubeadm.go:310] [mark-control-plane] Marking the node auto-641488 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:50:22.058093   74260 kubeadm.go:310] [bootstrap-token] Using token: iwjix8.03i2famdx71c6lnz
	I0815 01:50:22.059503   74260 out.go:204]   - Configuring RBAC rules ...
	I0815 01:50:22.059634   74260 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:50:22.059727   74260 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:50:22.059894   74260 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:50:22.060037   74260 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:50:22.060171   74260 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:50:22.060278   74260 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:50:22.060402   74260 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:50:22.060465   74260 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:50:22.060531   74260 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:50:22.060540   74260 kubeadm.go:310] 
	I0815 01:50:22.060628   74260 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:50:22.060637   74260 kubeadm.go:310] 
	I0815 01:50:22.060760   74260 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:50:22.060769   74260 kubeadm.go:310] 
	I0815 01:50:22.060802   74260 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:50:22.060884   74260 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:50:22.060955   74260 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:50:22.060965   74260 kubeadm.go:310] 
	I0815 01:50:22.061055   74260 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:50:22.061068   74260 kubeadm.go:310] 
	I0815 01:50:22.061137   74260 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:50:22.061148   74260 kubeadm.go:310] 
	I0815 01:50:22.061223   74260 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:50:22.061318   74260 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:50:22.061390   74260 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:50:22.061401   74260 kubeadm.go:310] 
	I0815 01:50:22.061469   74260 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:50:22.061536   74260 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:50:22.061545   74260 kubeadm.go:310] 
	I0815 01:50:22.061628   74260 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iwjix8.03i2famdx71c6lnz \
	I0815 01:50:22.061753   74260 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:50:22.061783   74260 kubeadm.go:310] 	--control-plane 
	I0815 01:50:22.061793   74260 kubeadm.go:310] 
	I0815 01:50:22.061893   74260 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:50:22.061905   74260 kubeadm.go:310] 
	I0815 01:50:22.062006   74260 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iwjix8.03i2famdx71c6lnz \
	I0815 01:50:22.062141   74260 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:50:22.062154   74260 cni.go:84] Creating CNI manager for ""
	I0815 01:50:22.062164   74260 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:50:22.063684   74260 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:50:22.064972   74260 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:50:22.078485   74260 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:50:22.095856   74260 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:50:22.095933   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:22.095984   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-641488 minikube.k8s.io/updated_at=2024_08_15T01_50_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=auto-641488 minikube.k8s.io/primary=true
	I0815 01:50:22.135555   74260 ops.go:34] apiserver oom_adj: -16
	I0815 01:50:22.239975   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:22.740160   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:23.240714   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:23.740509   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:23.769715   75004 main.go:141] libmachine: (kindnet-641488) DBG | domain kindnet-641488 has defined MAC address 52:54:00:ce:8e:7b in network mk-kindnet-641488
	I0815 01:50:23.770084   75004 main.go:141] libmachine: (kindnet-641488) DBG | unable to find current IP address of domain kindnet-641488 in network mk-kindnet-641488
	I0815 01:50:23.770107   75004 main.go:141] libmachine: (kindnet-641488) DBG | I0815 01:50:23.770043   75070 retry.go:31] will retry after 5.045275324s: waiting for machine to come up
	I0815 01:50:24.240186   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:24.740121   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:25.240774   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:25.740753   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:26.240282   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:26.740376   74260 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:50:26.849382   74260 kubeadm.go:1113] duration metric: took 4.753508694s to wait for elevateKubeSystemPrivileges
	I0815 01:50:26.849428   74260 kubeadm.go:394] duration metric: took 15.26962489s to StartCluster
	I0815 01:50:26.849452   74260 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:26.849539   74260 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:50:26.851430   74260 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:50:26.851642   74260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 01:50:26.851658   74260 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:50:26.851699   74260 addons.go:69] Setting storage-provisioner=true in profile "auto-641488"
	I0815 01:50:26.851721   74260 addons.go:234] Setting addon storage-provisioner=true in "auto-641488"
	I0815 01:50:26.851736   74260 addons.go:69] Setting default-storageclass=true in profile "auto-641488"
	I0815 01:50:26.851643   74260 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:50:26.851747   74260 host.go:66] Checking if "auto-641488" exists ...
	I0815 01:50:26.851790   74260 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-641488"
	I0815 01:50:26.851858   74260 config.go:182] Loaded profile config "auto-641488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:50:26.852284   74260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:26.852290   74260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:26.852315   74260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:26.852316   74260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:26.853109   74260 out.go:177] * Verifying Kubernetes components...
	I0815 01:50:26.854158   74260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:50:26.867146   74260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0815 01:50:26.867345   74260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0815 01:50:26.867573   74260 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:26.867775   74260 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:26.868068   74260 main.go:141] libmachine: Using API Version  1
	I0815 01:50:26.868092   74260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:26.868193   74260 main.go:141] libmachine: Using API Version  1
	I0815 01:50:26.868213   74260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:26.868423   74260 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:26.868553   74260 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:26.868633   74260 main.go:141] libmachine: (auto-641488) Calling .GetState
	I0815 01:50:26.869116   74260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:26.869147   74260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:26.871754   74260 addons.go:234] Setting addon default-storageclass=true in "auto-641488"
	I0815 01:50:26.871786   74260 host.go:66] Checking if "auto-641488" exists ...
	I0815 01:50:26.872046   74260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:26.872069   74260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:26.885759   74260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0815 01:50:26.886295   74260 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:26.886816   74260 main.go:141] libmachine: Using API Version  1
	I0815 01:50:26.886844   74260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:26.887184   74260 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:26.887395   74260 main.go:141] libmachine: (auto-641488) Calling .GetState
	I0815 01:50:26.887434   74260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0815 01:50:26.887834   74260 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:26.888391   74260 main.go:141] libmachine: Using API Version  1
	I0815 01:50:26.888407   74260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:26.888846   74260 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:26.889244   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:26.889414   74260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:50:26.889442   74260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:50:26.890985   74260 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:50:26.892210   74260 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:50:26.892233   74260 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:50:26.892253   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:26.895742   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:26.896190   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:26.896213   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:26.896400   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:26.896600   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:26.896779   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:26.896931   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:26.906571   74260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0815 01:50:26.906934   74260 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:50:26.907523   74260 main.go:141] libmachine: Using API Version  1
	I0815 01:50:26.907540   74260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:50:26.907888   74260 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:50:26.908092   74260 main.go:141] libmachine: (auto-641488) Calling .GetState
	I0815 01:50:26.909676   74260 main.go:141] libmachine: (auto-641488) Calling .DriverName
	I0815 01:50:26.909961   74260 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:50:26.909979   74260 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:50:26.910001   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHHostname
	I0815 01:50:26.912809   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:26.913286   74260 main.go:141] libmachine: (auto-641488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:0e:e4", ip: ""} in network mk-auto-641488: {Iface:virbr3 ExpiryTime:2024-08-15 02:49:53 +0000 UTC Type:0 Mac:52:54:00:c7:0e:e4 Iaid: IPaddr:192.168.61.21 Prefix:24 Hostname:auto-641488 Clientid:01:52:54:00:c7:0e:e4}
	I0815 01:50:26.913305   74260 main.go:141] libmachine: (auto-641488) DBG | domain auto-641488 has defined IP address 192.168.61.21 and MAC address 52:54:00:c7:0e:e4 in network mk-auto-641488
	I0815 01:50:26.913624   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHPort
	I0815 01:50:26.913852   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHKeyPath
	I0815 01:50:26.914022   74260 main.go:141] libmachine: (auto-641488) Calling .GetSSHUsername
	I0815 01:50:26.914196   74260 sshutil.go:53] new ssh client: &{IP:192.168.61.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/auto-641488/id_rsa Username:docker}
	I0815 01:50:27.051907   74260 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 01:50:27.086419   74260 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:50:27.235815   74260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:50:27.344180   74260 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:50:27.724758   74260 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0815 01:50:27.725959   74260 node_ready.go:35] waiting up to 15m0s for node "auto-641488" to be "Ready" ...
	I0815 01:50:27.740708   74260 node_ready.go:49] node "auto-641488" has status "Ready":"True"
	I0815 01:50:27.740735   74260 node_ready.go:38] duration metric: took 14.727926ms for node "auto-641488" to be "Ready" ...
	I0815 01:50:27.740752   74260 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:50:27.756216   74260 pod_ready.go:78] waiting up to 15m0s for pod "coredns-6f6b679f8f-625hg" in "kube-system" namespace to be "Ready" ...
	I0815 01:50:28.199548   74260 main.go:141] libmachine: Making call to close driver server
	I0815 01:50:28.199579   74260 main.go:141] libmachine: (auto-641488) Calling .Close
	I0815 01:50:28.199578   74260 main.go:141] libmachine: Making call to close driver server
	I0815 01:50:28.199600   74260 main.go:141] libmachine: (auto-641488) Calling .Close
	I0815 01:50:28.199931   74260 main.go:141] libmachine: (auto-641488) DBG | Closing plugin on server side
	I0815 01:50:28.199931   74260 main.go:141] libmachine: (auto-641488) DBG | Closing plugin on server side
	I0815 01:50:28.199969   74260 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:50:28.199971   74260 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:50:28.199988   74260 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:50:28.199982   74260 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:50:28.200003   74260 main.go:141] libmachine: Making call to close driver server
	I0815 01:50:28.200006   74260 main.go:141] libmachine: Making call to close driver server
	I0815 01:50:28.200011   74260 main.go:141] libmachine: (auto-641488) Calling .Close
	I0815 01:50:28.200015   74260 main.go:141] libmachine: (auto-641488) Calling .Close
	I0815 01:50:28.200268   74260 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:50:28.200284   74260 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:50:28.200296   74260 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:50:28.200296   74260 main.go:141] libmachine: (auto-641488) DBG | Closing plugin on server side
	I0815 01:50:28.200306   74260 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:50:28.213539   74260 main.go:141] libmachine: Making call to close driver server
	I0815 01:50:28.213570   74260 main.go:141] libmachine: (auto-641488) Calling .Close
	I0815 01:50:28.213853   74260 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:50:28.213870   74260 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:50:28.215422   74260 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 01:50:28.216480   74260 addons.go:510] duration metric: took 1.364818354s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 01:50:28.230312   74260 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-641488" context rescaled to 1 replicas
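The start log above ends with minikube finishing bring-up of the auto-641488 node: at 01:50:27 it rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway 192.168.61.1, applies storage-provisioner.yaml and storageclass.yaml, and then reports both addons enabled and the coredns deployment rescaled to 1 replica. As a rough sketch of what the sed expressions in that log line produce, assuming the stock kubeadm Corefile layout (plugins other than the ones touched here are only summarized), the patched .:53 server block gains a hosts stanza just before the existing forward directive and a log directive just before errors:

	    .:53 {
	        log
	        errors
	        # ...other default plugins (health, ready, kubernetes, prometheus) unchanged
	        hosts {
	           192.168.61.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        # ...cache, loop, reload, loadbalance unchanged
	    }

With this in place CoreDNS answers host.minikube.internal from the hosts plugin and falls through to the normal in-cluster and upstream resolution for every other name.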
	
	
	==> CRI-O <==
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.519779696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686629519756957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fe958db-da11-461b-bb20-26b78c3dd4a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.520246937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a57b4cf3-deb2-40a9-8b04-8ea18a6924b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.520309805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a57b4cf3-deb2-40a9-8b04-8ea18a6924b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.520509301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a57b4cf3-deb2-40a9-8b04-8ea18a6924b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.558936669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=890f8146-83a6-4954-b0bf-17367d754e5c name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.559024733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=890f8146-83a6-4954-b0bf-17367d754e5c name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.561012759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bc588ce-7e39-477a-b41b-f3723586df86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.561494804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686629561468408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bc588ce-7e39-477a-b41b-f3723586df86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.562051866Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac076f10-43de-422a-b690-6365f33df685 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.562117868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac076f10-43de-422a-b690-6365f33df685 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.562365895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac076f10-43de-422a-b690-6365f33df685 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.599320278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f685af0-e9b0-4d0f-ae14-bca8733aebf7 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.599411455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f685af0-e9b0-4d0f-ae14-bca8733aebf7 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.600431694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7963f7ea-f4ff-4f6e-a1f5-64321ebc94e3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.600871968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686629600849865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7963f7ea-f4ff-4f6e-a1f5-64321ebc94e3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.601638978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e5b6f15-2baa-4673-b0c0-c929e89c0c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.601730164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e5b6f15-2baa-4673-b0c0-c929e89c0c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.601970269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e5b6f15-2baa-4673-b0c0-c929e89c0c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.635878376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b72ee162-1f42-4827-a075-14ab0f94dfe1 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.635961001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b72ee162-1f42-4827-a075-14ab0f94dfe1 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.636964717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d3df16b-cd12-4170-8e84-df9310f8e885 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.637521365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686629637493454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d3df16b-cd12-4170-8e84-df9310f8e885 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.638322576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e2a41cd-0156-4947-aaf4-d8939daba13c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.638388284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e2a41cd-0156-4947-aaf4-d8939daba13c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:50:29 embed-certs-190398 crio[720]: time="2024-08-15 01:50:29.638599478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde,PodSandboxId:a4abbdaa7b4a0c842e57c82be8d4503fc493bce96faddb763843ba0bf9a357b5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685651559623525,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 002656ed-b542-442d-9409-6f0b5cf557dc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24,PodSandboxId:d7842b9af2fc81c4cfd86863df726dd516c3a286d55de4b81bcc97c75b0ef314,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650875749000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kmmdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455019d9-07b5-418e-8668-26272424e96c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b,PodSandboxId:ef1cacc079024898b663785ed45bd67e3d403f843ba28e723bac34ecb06c1e55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685650521129931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kx2xv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
1e26858-a527-4f0d-a7fd-e5c3f82b29bc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813,PodSandboxId:7f51d493f991485a3a98e86d3318f6783185603ccb5420601701585a40ba4663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723685650232800684,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7hfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5,PodSandboxId:e444cfa8d96893666e4d07795897e4f03dd209e3a155ff5c980d4b8dac072da1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685639098491745,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa694d4a407ca969c7c1a2b66f6084ee,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b,PodSandboxId:c8654873f01a7bdad8806c986f3bbfa3e89834113498f8a6a655d6a1fedd3dc1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685639068548206,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f22f388fc823ef71b4e262d5d4490a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946,PodSandboxId:ef013eee580a23f2cb9ca6894d5744fa94096aa9045a555a4fcd71919b5e7243,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685639060416310,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7,PodSandboxId:2c4b28379543a196b736544f05a44b70db699874afd9347ace82ae5157c8e4ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685639013837650,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cd60e54cffa9111f02db87b2ecb87f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813,PodSandboxId:e52b405d973349a960d80fff1f8cefe84e9ef89bea9f1bc3b7e2f5f6f8d2c7bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685355276954673,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-190398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f267513294d8711c1e8d2d912d1d20a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e2a41cd-0156-4947-aaf4-d8939daba13c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e19c80b54c6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   a4abbdaa7b4a0       storage-provisioner
	d5fb1c9d0ba32       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   d7842b9af2fc8       coredns-6f6b679f8f-kmmdc
	f1b2f2efc9842       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   ef1cacc079024       coredns-6f6b679f8f-kx2xv
	31fff97fba4f2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   7f51d493f9914       kube-proxy-7hfvr
	18cc0a0d4ab8d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   e444cfa8d9689       kube-scheduler-embed-certs-190398
	24478db215409       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   c8654873f01a7       etcd-embed-certs-190398
	fb4506ce76924       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   ef013eee580a2       kube-apiserver-embed-certs-190398
	1aa99cc6c43fc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   2c4b28379543a       kube-controller-manager-embed-certs-190398
	293849baffb77       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   e52b405d97334       kube-apiserver-embed-certs-190398
	
	
	==> coredns [d5fb1c9d0ba32a174f8f16cbccccf67d7e40194387549b313dae172f2965ac24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1b2f2efc9842fc0d074aa5a2e643a0cc59b68f537e1d0edbee2d0002071469b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-190398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-190398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=embed-certs-190398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-190398
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:49:32 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:49:32 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:49:32 +0000   Thu, 15 Aug 2024 01:33:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:49:32 +0000   Thu, 15 Aug 2024 01:34:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.151
	  Hostname:    embed-certs-190398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8eb300ebe3644369a5de316135d838a7
	  System UUID:                8eb300eb-e364-4369-a5de-316135d838a7
	  Boot ID:                    98d434e5-9be9-4d3f-841e-aeb76a80c23a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-kmmdc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-kx2xv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-190398                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-190398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-190398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-7hfvr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-190398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-4ldv7               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-190398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-190398 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-190398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-190398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-190398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-190398 event: Registered Node embed-certs-190398 in Controller
	
	
	==> dmesg <==
	[  +0.058561] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.884646] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.832383] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537949] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 01:29] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.054020] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068556] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.182072] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.139307] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.306934] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.030895] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +2.063977] systemd-fstab-generator[924]: Ignoring "noauto" option for root device
	[  +0.059725] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.532942] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.449354] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 01:33] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.559129] systemd-fstab-generator[2578]: Ignoring "noauto" option for root device
	[Aug15 01:34] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.647430] systemd-fstab-generator[2900]: Ignoring "noauto" option for root device
	[  +5.370814] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.090609] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.458787] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [24478db2154093a3701e841c9781ce568f8451ca53aff1b1899a7ca2187aa73b] <==
	{"level":"info","ts":"2024-08-15T01:33:59.549570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:33:59.549846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:33:59.552665Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:33:59.557614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:33:59.560441Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31c137043c99215d","local-member-id":"cec33aa8f0724833","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.560633Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.562268Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:33:59.563444Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:33:59.565833Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:33:59.569241Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:33:59.570129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.151:2379"}
	{"level":"info","ts":"2024-08-15T01:43:59.945967Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-08-15T01:43:59.956137Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"9.772155ms","hash":3019690798,"current-db-size-bytes":2330624,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2330624,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-15T01:43:59.956245Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3019690798,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2024-08-15T01:48:52.278532Z","caller":"traceutil/trace.go:171","msg":"trace[71348204] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"250.359988ms","start":"2024-08-15T01:48:52.028141Z","end":"2024-08-15T01:48:52.278501Z","steps":["trace[71348204] 'process raft request'  (duration: 250.141186ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:48:52.546291Z","caller":"traceutil/trace.go:171","msg":"trace[1312853278] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"101.777881ms","start":"2024-08-15T01:48:52.444495Z","end":"2024-08-15T01:48:52.546273Z","steps":["trace[1312853278] 'process raft request'  (duration: 101.617531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:48:52.813533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.715889ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:48:52.813635Z","caller":"traceutil/trace.go:171","msg":"trace[250869956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1202; }","duration":"125.932734ms","start":"2024-08-15T01:48:52.687685Z","end":"2024-08-15T01:48:52.813617Z","steps":["trace[250869956] 'range keys from in-memory index tree'  (duration: 125.69455ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T01:48:59.954566Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-08-15T01:48:59.959299Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":964,"took":"4.385587ms","hash":1410707759,"current-db-size-bytes":2330624,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-15T01:48:59.959351Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1410707759,"revision":964,"compact-revision":721}
	{"level":"warn","ts":"2024-08-15T01:49:46.869544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.909439ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:49:46.869628Z","caller":"traceutil/trace.go:171","msg":"trace[323985808] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1248; }","duration":"182.017255ms","start":"2024-08-15T01:49:46.687595Z","end":"2024-08-15T01:49:46.869612Z","steps":["trace[323985808] 'range keys from in-memory index tree'  (duration: 181.853662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T01:50:13.795499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.959013ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T01:50:13.795639Z","caller":"traceutil/trace.go:171","msg":"trace[1891718138] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1269; }","duration":"108.176679ms","start":"2024-08-15T01:50:13.687442Z","end":"2024-08-15T01:50:13.795619Z","steps":["trace[1891718138] 'range keys from in-memory index tree'  (duration: 107.835895ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:50:29 up 21 min,  0 users,  load average: 0.08, 0.13, 0.12
	Linux embed-certs-190398 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [293849baffb776e957f241f40b637fb7c4a81bf2aa9f5f1e804a2cef6a368813] <==
	W0815 01:33:55.000868       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.008771       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.064072       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.160813       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.161346       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.203093       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.216624       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.271875       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.322063       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.327486       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.480889       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.493647       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.522263       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.530729       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.562460       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.578873       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.600094       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.625869       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.693248       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.698933       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.851040       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:55.901960       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.031066       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.102718       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:33:56.191271       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fb4506ce769245994e30842e485ac09f3de96303c68d5c1beaef90f8b8a35946] <==
	I0815 01:47:02.804642       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:47:02.804696       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:49:01.803958       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:49:01.804354       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 01:49:02.807241       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:49:02.807357       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:49:02.807421       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:49:02.807498       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:49:02.808632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:49:02.808713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:50:02.809239       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:50:02.809313       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 01:50:02.809365       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:50:02.809391       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:50:02.810442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:50:02.810490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1aa99cc6c43fc2f9a4455c9f2ed3323fea6bd332c4e85ee9fe56851a182d64b7] <==
	I0815 01:45:09.372911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:45:14.413480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="331.717µs"
	I0815 01:45:29.411505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="50.08µs"
	E0815 01:45:38.894683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:45:39.380551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:46:08.901401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:46:09.388960       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:46:38.907814       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:46:39.397153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:08.913341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:09.404531       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:38.919089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:39.413335       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:08.926351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:09.420782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:38.934263       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:39.430657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:49:08.940475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:49:09.438082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:49:32.527561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-190398"
	E0815 01:49:38.947928       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:49:39.447904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:50:08.956417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:50:09.458387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:50:25.413846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="317.316µs"
	
	
	==> kube-proxy [31fff97fba4f249d22ae559a3fe50e7b931e5c20404aaacbfc8a4ab2e147a813] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:34:10.674829       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:34:10.694038       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.151"]
	E0815 01:34:10.694131       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:34:10.949426       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:34:10.949513       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:34:10.949586       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:34:10.959037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:34:10.963067       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:34:10.976764       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:34:10.992558       1 config.go:197] "Starting service config controller"
	I0815 01:34:10.999849       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:34:10.999937       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:34:10.999946       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:34:11.000527       1 config.go:326] "Starting node config controller"
	I0815 01:34:11.000535       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:34:11.101310       1 shared_informer.go:320] Caches are synced for node config
	I0815 01:34:11.101407       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:34:11.101458       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18cc0a0d4ab8d0c4b6af0fba77cc19d18df1c7fa7512f15ed521c1dae749f1d5] <==
	W0815 01:34:01.849127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:34:01.849160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.849247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:01.849281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:34:01.850285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:34:01.850502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:01.850548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:34:01.850563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.751989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 01:34:02.752166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.827223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:34:02.827330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.838415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:34:02.838512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.849133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:02.849226       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.922378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:02.922428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:02.953605       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:34:02.953653       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 01:34:03.024363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:03.024412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 01:34:05.928391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:49:32 embed-certs-190398 kubelet[2907]: E0815 01:49:32.396532    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:49:34 embed-certs-190398 kubelet[2907]: E0815 01:49:34.596761    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686574596348873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:34 embed-certs-190398 kubelet[2907]: E0815 01:49:34.597021    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686574596348873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:44 embed-certs-190398 kubelet[2907]: E0815 01:49:44.598990    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686584598694379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:44 embed-certs-190398 kubelet[2907]: E0815 01:49:44.599028    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686584598694379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:45 embed-certs-190398 kubelet[2907]: E0815 01:49:45.395408    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:49:54 embed-certs-190398 kubelet[2907]: E0815 01:49:54.601076    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686594600724995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:54 embed-certs-190398 kubelet[2907]: E0815 01:49:54.601111    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686594600724995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:59 embed-certs-190398 kubelet[2907]: E0815 01:49:59.396939    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]: E0815 01:50:04.412131    2907 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]: E0815 01:50:04.602031    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686604601798655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:04 embed-certs-190398 kubelet[2907]: E0815 01:50:04.602065    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686604601798655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:13 embed-certs-190398 kubelet[2907]: E0815 01:50:13.408100    2907 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 01:50:13 embed-certs-190398 kubelet[2907]: E0815 01:50:13.408505    2907 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 01:50:13 embed-certs-190398 kubelet[2907]: E0815 01:50:13.408840    2907 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7q9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-4ldv7_kube-system(ea1c5492-373d-445c-a135-b91569186449): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 15 01:50:13 embed-certs-190398 kubelet[2907]: E0815 01:50:13.410288    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	Aug 15 01:50:14 embed-certs-190398 kubelet[2907]: E0815 01:50:14.603871    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686614603588713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:14 embed-certs-190398 kubelet[2907]: E0815 01:50:14.603924    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686614603588713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:24 embed-certs-190398 kubelet[2907]: E0815 01:50:24.605913    2907 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686624605570349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:24 embed-certs-190398 kubelet[2907]: E0815 01:50:24.605986    2907 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686624605570349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:50:25 embed-certs-190398 kubelet[2907]: E0815 01:50:25.396580    2907 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4ldv7" podUID="ea1c5492-373d-445c-a135-b91569186449"
	
	
	==> storage-provisioner [7e19c80b54c6a0fd2f130825b9928566ec4fd02360f7e7ceb57baebfb1f9ecde] <==
	I0815 01:34:11.666057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:34:11.678664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:34:11.678784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:34:11.691123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:34:11.691267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"834cb5be-434c-4bf7-93c0-c8e1bed0fb8c", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4 became leader
	I0815 01:34:11.691441       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4!
	I0815 01:34:11.792228       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-190398_2b6cb8f1-cfd7-4443-84f8-49ea296b44b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190398 -n embed-certs-190398
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-190398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4ldv7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7: exit status 1 (61.877725ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4ldv7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-190398 describe pod metrics-server-6867b74b74-4ldv7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (428.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (313.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884893 -n no-preload-884893
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 01:49:25.959710722 +0000 UTC m=+6240.659942319
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-884893 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-884893 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.569µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-884893 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-884893 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-884893 logs -n 25: (1.280422344s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:48 UTC | 15 Aug 24 01:48 UTC |
	| start   | -p newest-cni-840156 --memory=2200 --alsologtostderr   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:48 UTC | 15 Aug 24 01:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-840156             | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-840156                                   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-840156                  | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC | 15 Aug 24 01:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-840156 --memory=2200 --alsologtostderr   | newest-cni-840156            | jenkins | v1.33.1 | 15 Aug 24 01:49 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:49:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:49:18.064020   73914 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:49:18.064456   73914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:49:18.064474   73914 out.go:304] Setting ErrFile to fd 2...
	I0815 01:49:18.064482   73914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:49:18.064999   73914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:49:18.066016   73914 out.go:298] Setting JSON to false
	I0815 01:49:18.066948   73914 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9103,"bootTime":1723677455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:49:18.067005   73914 start.go:139] virtualization: kvm guest
	I0815 01:49:18.068748   73914 out.go:177] * [newest-cni-840156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:49:18.070167   73914 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:49:18.070173   73914 notify.go:220] Checking for updates...
	I0815 01:49:18.072327   73914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:49:18.073427   73914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:49:18.074670   73914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:49:18.075791   73914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:49:18.076818   73914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:49:18.078075   73914 config.go:182] Loaded profile config "newest-cni-840156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:49:18.078456   73914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:49:18.078503   73914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:49:18.093514   73914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0815 01:49:18.093930   73914 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:49:18.094385   73914 main.go:141] libmachine: Using API Version  1
	I0815 01:49:18.094406   73914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:49:18.094763   73914 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:49:18.094950   73914 main.go:141] libmachine: (newest-cni-840156) Calling .DriverName
	I0815 01:49:18.095181   73914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:49:18.095466   73914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:49:18.095498   73914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:49:18.109709   73914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0815 01:49:18.110073   73914 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:49:18.110579   73914 main.go:141] libmachine: Using API Version  1
	I0815 01:49:18.110605   73914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:49:18.110894   73914 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:49:18.111076   73914 main.go:141] libmachine: (newest-cni-840156) Calling .DriverName
	I0815 01:49:18.145857   73914 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:49:18.147043   73914 start.go:297] selected driver: kvm2
	I0815 01:49:18.147066   73914 start.go:901] validating driver "kvm2" against &{Name:newest-cni-840156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-840156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:49:18.147184   73914 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:49:18.148061   73914 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:49:18.148144   73914 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:49:18.162423   73914 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:49:18.162891   73914 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 01:49:18.162969   73914 cni.go:84] Creating CNI manager for ""
	I0815 01:49:18.162982   73914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:49:18.163022   73914 start.go:340] cluster config:
	{Name:newest-cni-840156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-840156 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:49:18.163115   73914 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:49:18.164755   73914 out.go:177] * Starting "newest-cni-840156" primary control-plane node in "newest-cni-840156" cluster
	I0815 01:49:18.165861   73914 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:49:18.165894   73914 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:49:18.165902   73914 cache.go:56] Caching tarball of preloaded images
	I0815 01:49:18.165966   73914 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:49:18.165976   73914 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:49:18.166070   73914 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/newest-cni-840156/config.json ...
	I0815 01:49:18.166239   73914 start.go:360] acquireMachinesLock for newest-cni-840156: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:49:18.166282   73914 start.go:364] duration metric: took 27.699µs to acquireMachinesLock for "newest-cni-840156"
	I0815 01:49:18.166295   73914 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:49:18.166304   73914 fix.go:54] fixHost starting: 
	I0815 01:49:18.166563   73914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:49:18.166592   73914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:49:18.181347   73914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0815 01:49:18.181739   73914 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:49:18.182226   73914 main.go:141] libmachine: Using API Version  1
	I0815 01:49:18.182247   73914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:49:18.182663   73914 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:49:18.182887   73914 main.go:141] libmachine: (newest-cni-840156) Calling .DriverName
	I0815 01:49:18.183020   73914 main.go:141] libmachine: (newest-cni-840156) Calling .GetState
	I0815 01:49:18.184704   73914 fix.go:112] recreateIfNeeded on newest-cni-840156: state=Stopped err=<nil>
	I0815 01:49:18.184740   73914 main.go:141] libmachine: (newest-cni-840156) Calling .DriverName
	W0815 01:49:18.185066   73914 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:49:18.187694   73914 out.go:177] * Restarting existing kvm2 VM for "newest-cni-840156" ...
	I0815 01:49:18.188823   73914 main.go:141] libmachine: (newest-cni-840156) Calling .Start
	I0815 01:49:18.189013   73914 main.go:141] libmachine: (newest-cni-840156) Ensuring networks are active...
	I0815 01:49:18.189882   73914 main.go:141] libmachine: (newest-cni-840156) Ensuring network default is active
	I0815 01:49:18.190212   73914 main.go:141] libmachine: (newest-cni-840156) Ensuring network mk-newest-cni-840156 is active
	I0815 01:49:18.190578   73914 main.go:141] libmachine: (newest-cni-840156) Getting domain xml...
	I0815 01:49:18.191445   73914 main.go:141] libmachine: (newest-cni-840156) Creating domain...
	I0815 01:49:19.428122   73914 main.go:141] libmachine: (newest-cni-840156) Waiting to get IP...
	I0815 01:49:19.429233   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:19.429656   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:19.429715   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:19.429623   73949 retry.go:31] will retry after 237.416697ms: waiting for machine to come up
	I0815 01:49:19.669129   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:19.669627   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:19.669659   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:19.669581   73949 retry.go:31] will retry after 354.351616ms: waiting for machine to come up
	I0815 01:49:20.025222   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:20.025714   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:20.025741   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:20.025673   73949 retry.go:31] will retry after 418.95248ms: waiting for machine to come up
	I0815 01:49:20.446426   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:20.446908   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:20.446938   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:20.446856   73949 retry.go:31] will retry after 607.928287ms: waiting for machine to come up
	I0815 01:49:21.056132   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:21.056631   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:21.056693   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:21.056570   73949 retry.go:31] will retry after 587.841088ms: waiting for machine to come up
	I0815 01:49:21.646267   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:21.646774   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:21.646803   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:21.646710   73949 retry.go:31] will retry after 850.707402ms: waiting for machine to come up
	I0815 01:49:22.498697   73914 main.go:141] libmachine: (newest-cni-840156) DBG | domain newest-cni-840156 has defined MAC address 52:54:00:9b:fa:41 in network mk-newest-cni-840156
	I0815 01:49:22.499184   73914 main.go:141] libmachine: (newest-cni-840156) DBG | unable to find current IP address of domain newest-cni-840156 in network mk-newest-cni-840156
	I0815 01:49:22.499216   73914 main.go:141] libmachine: (newest-cni-840156) DBG | I0815 01:49:22.499137   73949 retry.go:31] will retry after 1.154352445s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.582262204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686566582217790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fca32d99-f0f5-4fc2-906e-43f643bb7bb9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.583050654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b685bcd-14fc-4c3a-92b3-a93261f94d36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.583120853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b685bcd-14fc-4c3a-92b3-a93261f94d36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.583336808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b685bcd-14fc-4c3a-92b3-a93261f94d36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.634081272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca080b2d-2e4d-42cf-90c7-876b4b003c33 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.634186431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca080b2d-2e4d-42cf-90c7-876b4b003c33 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.635333921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6736b3ce-c850-4139-954d-3ef7dfe4837f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.635989817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686566635935496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6736b3ce-c850-4139-954d-3ef7dfe4837f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.636691945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ae24bd6-c0f0-4d93-a6ff-7e665354941a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.636831609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ae24bd6-c0f0-4d93-a6ff-7e665354941a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.637140310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ae24bd6-c0f0-4d93-a6ff-7e665354941a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.682698958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97a79bee-b492-4509-9490-6089a955cc0d name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.682878113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97a79bee-b492-4509-9490-6089a955cc0d name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.691206256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0ed0caa-9161-4d41-9861-ca27ee631e17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.691897720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686566691767358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0ed0caa-9161-4d41-9861-ca27ee631e17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.692493406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75ee1c1b-1966-44ea-8843-364bc6d52639 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.692592198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75ee1c1b-1966-44ea-8843-364bc6d52639 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.693081802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75ee1c1b-1966-44ea-8843-364bc6d52639 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.733007258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7662bedb-947f-4c56-88c5-bddcedff1114 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.733129724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7662bedb-947f-4c56-88c5-bddcedff1114 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.734509118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84e23eb2-d3d0-4b85-a65e-3ce1a995229b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.735261277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686566735149700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84e23eb2-d3d0-4b85-a65e-3ce1a995229b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.736101569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35b53040-9757-438e-bd29-69e85c9666ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.736205234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35b53040-9757-438e-bd29-69e85c9666ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:49:26 no-preload-884893 crio[725]: time="2024-08-15 01:49:26.737897182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb,PodSandboxId:5c7008c348c981b8763bcce7014b8e72fe463b3fc71862b86b18640c9543ab98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723685700842744173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpggv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55ef2a4b-a502-452d-a3bd-df1209ff247b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944,PodSandboxId:dee5eaae9cbd5f8a6eafba097553b303e1cca6c9aa3d81dba2a63bef2d105a59,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723685700818266451,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4cf6d02-281f-4fb5-9ff7-c36143d3af58,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809,PodSandboxId:3e54b8667374b243940f10a001097777e7529e107fc377729ccc2509d54be696,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685700053594174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t77b6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcdf11ef-28a6-428c-b033-e29b51af8f0e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7,PodSandboxId:e42f3999b768805fd19ff1b4cdbb819147972df9724fee70ee2cf6152101e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723685699889435196,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-srq48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9520ab8-24d6-410d-bcba-b59e91e817a9
,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018,PodSandboxId:2724a4b97b2c71cadc08736d7b3584e4c160d7c9f8f91615d5d322ccd219a174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723685688545498293,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0,PodSandboxId:ed4b9c791d8001822698e0f53309ed7c7cf5617989525033958bc9d5cd4f2fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723685688609158451,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e9abe2f26e4b74b3ad848d6c1c0015a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4,PodSandboxId:0eb77cab445568c43765c0c932600e85b9fb84d989e30fb00d4e5245e43dd6d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723685688601004368,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c0f929f550e2126a4510bc015889c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d,PodSandboxId:4ff603c7525170ae77c5b4aa9130dd477747bf6d38b0c3dd928638dd35e2cd38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723685688547601151,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b8187b7ca4df4fe0b938492f06768c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785,PodSandboxId:fe23604f5c8575a4e645973c6bb989b7a45b12ce694025c224cf6882438874ee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723685407987691766,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-884893,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd79e64eac9c2de03f14528257d9e3e,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35b53040-9757-438e-bd29-69e85c9666ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4dbba9667928e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   5c7008c348c98       kube-proxy-dpggv
	57c2ab89a0842       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   dee5eaae9cbd5       storage-provisioner
	f4f535bcfc2a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   3e54b8667374b       coredns-6f6b679f8f-t77b6
	79fc1478a6886       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   e42f3999b7688       coredns-6f6b679f8f-srq48
	581ca8baf5f89       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   ed4b9c791d800       etcd-no-preload-884893
	62400e7ad5626       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   0eb77cab44556       kube-controller-manager-no-preload-884893
	7c7b4ee82b9e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   4ff603c752517       kube-scheduler-no-preload-884893
	3ad0ed3214dba       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   2724a4b97b2c7       kube-apiserver-no-preload-884893
	49b854d9fa000       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   fe23604f5c857       kube-apiserver-no-preload-884893
	
	
	==> coredns [79fc1478a68861312c3eacec272d52a11124ec054eb7b45546bb5f14f89765a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f4f535bcfc2a08c0ab6b5aeada0fa617c10da62116b4e6d37d601e7a97d18809] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-884893
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-884893
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168
	                    minikube.k8s.io/name=no-preload-884893
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 01:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-884893
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 01:49:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 01:45:16 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 01:45:16 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 01:45:16 +0000   Thu, 15 Aug 2024 01:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 01:45:16 +0000   Thu, 15 Aug 2024 01:34:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.166
	  Hostname:    no-preload-884893
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b85121e7c83470e9872f0b2990e5486
	  System UUID:                0b85121e-7c83-470e-9872-f0b2990e5486
	  Boot ID:                    edd7858c-2fa1-497f-b295-6f7fd2f899e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-srq48                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-t77b6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-884893                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-884893             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-884893    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-dpggv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-884893             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-w47b2              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-884893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-884893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-884893 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-884893 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-884893 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-884893 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-884893 event: Registered Node no-preload-884893 in Controller
	
	
	==> dmesg <==
	[  +0.052228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039079] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839275] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.854598] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.527464] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.624334] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.055295] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056479] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.197619] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.129485] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.284437] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[Aug15 01:30] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.064506] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.782167] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +5.594688] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.801928] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 01:34] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.813940] systemd-fstab-generator[3061]: Ignoring "noauto" option for root device
	[  +4.561994] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.481705] systemd-fstab-generator[3381]: Ignoring "noauto" option for root device
	[  +5.863050] systemd-fstab-generator[3504]: Ignoring "noauto" option for root device
	[  +0.099771] kauditd_printk_skb: 14 callbacks suppressed
	[Aug15 01:35] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [581ca8baf5f892066c4d1398ac6249c2306a4fb271e16df19126993e37f0a8c0] <==
	{"level":"info","ts":"2024-08-15T01:34:49.022168Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.166:2380"}
	{"level":"info","ts":"2024-08-15T01:34:49.022201Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.166:2380"}
	{"level":"info","ts":"2024-08-15T01:34:49.140796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.140927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.140974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 received MsgPreVoteResp from e532d532ae69e491 at term 1"}
	{"level":"info","ts":"2024-08-15T01:34:49.141019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 received MsgVoteResp from e532d532ae69e491 at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e532d532ae69e491 became leader at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.141103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e532d532ae69e491 elected leader e532d532ae69e491 at term 2"}
	{"level":"info","ts":"2024-08-15T01:34:49.145892Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.148001Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e532d532ae69e491","local-member-attributes":"{Name:no-preload-884893 ClientURLs:[https://192.168.61.166:2379]}","request-path":"/0/members/e532d532ae69e491/attributes","cluster-id":"f878173fc0af8a15","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T01:34:49.148228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:34:49.148560Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T01:34:49.149626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f878173fc0af8a15","local-member-id":"e532d532ae69e491","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.159673Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.151801Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T01:34:49.152407Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:34:49.159264Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T01:34:49.163827Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T01:34:49.163873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T01:34:49.164642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T01:34:49.167466Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.166:2379"}
	{"level":"info","ts":"2024-08-15T01:44:49.895522Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-08-15T01:44:49.904790Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":715,"took":"8.353163ms","hash":825133874,"current-db-size-bytes":2125824,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2125824,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-15T01:44:49.904906Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":825133874,"revision":715,"compact-revision":-1}
	
	
	==> kernel <==
	 01:49:27 up 19 min,  0 users,  load average: 0.18, 0.29, 0.20
	Linux no-preload-884893 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3ad0ed3214dba2d76fd07d6e4f7e064c62164b9d0fb194310d402ca42645d018] <==
	W0815 01:44:52.248504       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:44:52.248555       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:44:52.249656       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:44:52.249695       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:45:52.250504       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 01:45:52.250523       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:45:52.250793       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 01:45:52.250859       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:45:52.251973       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:45:52.252041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 01:47:52.252556       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 01:47:52.252557       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 01:47:52.253012       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 01:47:52.253136       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 01:47:52.254188       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 01:47:52.254330       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [49b854d9fa0003c8e3fe7a1437d6f19611f461fe908b1c82cd65f87158173785] <==
	W0815 01:34:43.888297       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.894874       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.944354       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.954459       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.984349       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:43.992907       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.009578       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.027082       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.044700       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.059074       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.090170       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.097783       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.101078       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.117997       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.118067       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.121431       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.144296       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.156652       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.194103       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.202442       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.202674       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.209977       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.242434       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.394129       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 01:34:44.640008       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [62400e7ad56261cc5a4b278617b4f2707f9b28fcb877ff9c8d215aa10030dea4] <==
	E0815 01:43:58.307119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:43:58.754791       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:44:28.313676       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:44:28.762499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:44:58.319846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:44:58.769532       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:45:16.949617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-884893"
	E0815 01:45:28.326472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:45:28.776530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:45:58.333869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:45:58.784947       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 01:46:01.748064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="214.912µs"
	I0815 01:46:14.740878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="70.724µs"
	E0815 01:46:28.341050       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:46:28.792941       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:46:58.348103       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:46:58.800758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:28.354472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:28.809249       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:47:58.361271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:47:58.816531       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:28.367790       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:28.825350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 01:48:58.376151       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 01:48:58.835219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4dbba9667928e998c2a6815b23e55cd7f19614c817baa75eb5a7fa90b74bf8fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 01:35:01.148101       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 01:35:01.161834       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.166"]
	E0815 01:35:01.161931       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 01:35:01.218897       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 01:35:01.218940       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 01:35:01.218968       1 server_linux.go:169] "Using iptables Proxier"
	I0815 01:35:01.223202       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 01:35:01.223539       1 server.go:483] "Version info" version="v1.31.0"
	I0815 01:35:01.223565       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 01:35:01.225092       1 config.go:197] "Starting service config controller"
	I0815 01:35:01.225142       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 01:35:01.225179       1 config.go:104] "Starting endpoint slice config controller"
	I0815 01:35:01.225183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 01:35:01.227499       1 config.go:326] "Starting node config controller"
	I0815 01:35:01.227568       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 01:35:01.325403       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 01:35:01.325521       1 shared_informer.go:320] Caches are synced for service config
	I0815 01:35:01.328010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7c7b4ee82b9e619b75aa4a1345513619e5cb870d25e0fa3995118c4e585f425d] <==
	W0815 01:34:52.159925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:52.160034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.187533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 01:34:52.187954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.204939       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 01:34:52.205010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.312789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 01:34:52.312928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.361340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 01:34:52.361382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.368831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 01:34:52.368948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.403441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 01:34:52.403662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.515628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 01:34:52.516123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.549321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 01:34:52.549497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.556568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 01:34:52.556622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.589771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 01:34:52.589819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 01:34:52.824918       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 01:34:52.824980       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 01:34:54.566269       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 01:48:21 no-preload-884893 kubelet[3388]: E0815 01:48:21.726234    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:48:23 no-preload-884893 kubelet[3388]: E0815 01:48:23.931541    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686503929956625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:23 no-preload-884893 kubelet[3388]: E0815 01:48:23.931944    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686503929956625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:32 no-preload-884893 kubelet[3388]: E0815 01:48:32.726221    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:48:33 no-preload-884893 kubelet[3388]: E0815 01:48:33.933828    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686513932966214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:33 no-preload-884893 kubelet[3388]: E0815 01:48:33.933855    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686513932966214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:43 no-preload-884893 kubelet[3388]: E0815 01:48:43.727280    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:48:43 no-preload-884893 kubelet[3388]: E0815 01:48:43.935202    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686523934968240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:43 no-preload-884893 kubelet[3388]: E0815 01:48:43.935263    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686523934968240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]: E0815 01:48:53.736900    3388 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]: E0815 01:48:53.936467    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686533936202184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:53 no-preload-884893 kubelet[3388]: E0815 01:48:53.936490    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686533936202184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:48:55 no-preload-884893 kubelet[3388]: E0815 01:48:55.725611    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:49:03 no-preload-884893 kubelet[3388]: E0815 01:49:03.937829    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686543937444062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:03 no-preload-884893 kubelet[3388]: E0815 01:49:03.937856    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686543937444062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:10 no-preload-884893 kubelet[3388]: E0815 01:49:10.725197    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	Aug 15 01:49:13 no-preload-884893 kubelet[3388]: E0815 01:49:13.939995    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686553939523077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:13 no-preload-884893 kubelet[3388]: E0815 01:49:13.940049    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686553939523077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:23 no-preload-884893 kubelet[3388]: E0815 01:49:23.941175    3388 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686563940935116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:23 no-preload-884893 kubelet[3388]: E0815 01:49:23.941217    3388 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686563940935116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 01:49:24 no-preload-884893 kubelet[3388]: E0815 01:49:24.724883    3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-w47b2" podUID="7423be62-ae01-4b3f-9e24-049f4788f32f"
	
	
	==> storage-provisioner [57c2ab89a084236599ade963c094c43b3745cdd87df29638978ec4cf68957944] <==
	I0815 01:35:01.030677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 01:35:01.054167       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 01:35:01.054489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 01:35:01.066838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 01:35:01.068593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022!
	I0815 01:35:01.068694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e77a760b-ddfd-47db-860c-05aaa5af85a2", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022 became leader
	I0815 01:35:01.168877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-884893_0e27ce53-20bd-4b85-82c2-b055aaa97022!
	

                                                
                                                
-- /stdout --
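The controller-manager errors above ("stale GroupVersion discovery: metrics.k8s.io/v1beta1") and the kubelet ImagePullBackOff entries describe the same failure: the metrics-server pod never starts because its image points at the unresolvable registry host fake.domain, so the aggregated metrics API it backs never becomes available. A minimal manual check against the same cluster, assuming the conventional APIService name v1beta1.metrics.k8s.io and the k8s-app=metrics-server label (standard for the metrics-server addon, not shown in the log itself):

	# APIService backing metrics.k8s.io/v1beta1 (cluster-scoped)
	kubectl --context no-preload-884893 get apiservice v1beta1.metrics.k8s.io
	# the pod the aggregator is waiting on (label is an assumption)
	kubectl --context no-preload-884893 -n kube-system get pods -l k8s-app=metrics-server
	# pod name taken from the kubelet log above
	kubectl --context no-preload-884893 -n kube-system describe pod metrics-server-6867b74b74-w47b2

While the pod sits in ImagePullBackOff, the first command would typically report the APIService as unavailable (e.g. False with a MissingEndpoints reason), matching the repeated discovery errors in the controller-manager log.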
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884893 -n no-preload-884893
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-884893 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-w47b2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2: exit status 1 (65.532315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-w47b2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-884893 describe pod metrics-server-6867b74b74-w47b2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (313.40s)
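The post-mortem describe above exits NotFound even though metrics-server-6867b74b74-w47b2 was just listed as the only non-running pod; the describe is issued without -n, so it looks in the default namespace while the pod, per the kubelet log above, lives in kube-system, which would explain the mismatch. A sketch of the same two-step check with the namespace made explicit, reusing the commands the helper already ran:

	# list pods (any namespace) that are not in phase Running
	kubectl --context no-preload-884893 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe the non-running pod in its actual namespace
	kubectl --context no-preload-884893 -n kube-system describe pod metrics-server-6867b74b74-w47b2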

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (134.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
E0815 01:47:44.596681   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.21:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.21:8443: connect: connection refused
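The run of refused connections above is the harness repeatedly listing pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label selector while the apiserver at 192.168.50.21:8443 is down. For orientation, a minimal client-go sketch of one such query follows; the kubeconfig path is a placeholder and the code is illustrative only, not the actual minikube test helper.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; point it at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same query as the warnings above: pods in kubernetes-dashboard matching the label selector.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// With the apiserver stopped, this surfaces as "connect: connection refused".
		log.Fatalf("pod list failed: %v", err)
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}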
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (226.296768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-390782" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-390782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-390782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.9µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-390782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
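As a cross-check, the status probe the test falls back on can be reproduced directly: exit status 2 from minikube status means the profile exists but at least one component is stopped, which is why the harness logs it as "may be ok" and then skips the kubectl commands. Below is a small sketch that shells out the same way; the command and profile name are taken from the log above, while the wrapper itself is illustrative and not the harness's code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs after the dashboard wait times out.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-390782")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // prints e.g. "Stopped"
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Exit status 2: not all components are running (treated by the harness as "may be ok").
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}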
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (219.186705ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-390782 logs -n 25: (1.606702004s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:19 UTC | 15 Aug 24 01:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-146394                           | kubernetes-upgrade-146394    | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:20 UTC |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:20 UTC | 15 Aug 24 01:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-884893             | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-131152                              | cert-expiration-131152       | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	| delete  | -p                                                     | disable-driver-mounts-294760 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:21 UTC |
	|         | disable-driver-mounts-294760                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:21 UTC | 15 Aug 24 01:23 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-190398            | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC | 15 Aug 24 01:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-390782        | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:22 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-018537  | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-884893                  | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-884893                                   | no-preload-884893            | jenkins | v1.33.1 | 15 Aug 24 01:23 UTC | 15 Aug 24 01:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-190398                 | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-390782             | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-390782                              | old-k8s-version-390782       | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-190398                                  | embed-certs-190398           | jenkins | v1.33.1 | 15 Aug 24 01:24 UTC | 15 Aug 24 01:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-018537       | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-018537 | jenkins | v1.33.1 | 15 Aug 24 01:26 UTC | 15 Aug 24 01:34 UTC |
	|         | default-k8s-diff-port-018537                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 01:26:05.128952   67451 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:26:05.129201   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129210   67451 out.go:304] Setting ErrFile to fd 2...
	I0815 01:26:05.129214   67451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:26:05.129371   67451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:26:05.129877   67451 out.go:298] Setting JSON to false
	I0815 01:26:05.130775   67451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7710,"bootTime":1723677455,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:26:05.130828   67451 start.go:139] virtualization: kvm guest
	I0815 01:26:05.133200   67451 out.go:177] * [default-k8s-diff-port-018537] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:26:05.134520   67451 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:26:05.134534   67451 notify.go:220] Checking for updates...
	I0815 01:26:05.136725   67451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:26:05.137871   67451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:26:05.138973   67451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:26:05.140126   67451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:26:05.141168   67451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:26:05.142477   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:26:05.142872   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.142931   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.157398   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0815 01:26:05.157792   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.158237   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.158271   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.158625   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.158791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.158998   67451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:26:05.159268   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:26:05.159298   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:26:05.173332   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0815 01:26:05.173671   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:26:05.174063   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:26:05.174085   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:26:05.174378   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:26:05.174558   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:26:05.209931   67451 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 01:26:04.417005   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:05.210993   67451 start.go:297] selected driver: kvm2
	I0815 01:26:05.211005   67451 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.211106   67451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:26:05.211778   67451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.211854   67451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 01:26:05.226770   67451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 01:26:05.227141   67451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:26:05.227174   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:26:05.227182   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:26:05.227228   67451 start.go:340] cluster config:
	{Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:26:05.227335   67451 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 01:26:05.228866   67451 out.go:177] * Starting "default-k8s-diff-port-018537" primary control-plane node in "default-k8s-diff-port-018537" cluster
	I0815 01:26:05.229784   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:26:05.229818   67451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 01:26:05.229826   67451 cache.go:56] Caching tarball of preloaded images
	I0815 01:26:05.229905   67451 preload.go:172] Found /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 01:26:05.229916   67451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 01:26:05.230017   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:26:05.230223   67451 start.go:360] acquireMachinesLock for default-k8s-diff-port-018537: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:26:07.488887   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:13.568939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:16.640954   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:22.720929   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:25.792889   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:31.872926   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:34.944895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:41.024886   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:44.096913   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:50.176957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:53.249017   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:26:59.328928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:02.400891   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:08.480935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:11.552904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:17.632939   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:20.704876   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:26.784922   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:29.856958   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:35.936895   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:39.008957   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:45.088962   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:48.160964   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:54.240971   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:27:57.312935   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:03.393014   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:06.464973   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:12.544928   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:15.616915   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:21.696904   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:24.768924   66492 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.166:22: connect: no route to host
	I0815 01:28:27.773197   66919 start.go:364] duration metric: took 3m57.538488178s to acquireMachinesLock for "old-k8s-version-390782"
	I0815 01:28:27.773249   66919 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:27.773269   66919 fix.go:54] fixHost starting: 
	I0815 01:28:27.773597   66919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:27.773632   66919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:27.788757   66919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0815 01:28:27.789155   66919 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:27.789612   66919 main.go:141] libmachine: Using API Version  1
	I0815 01:28:27.789645   66919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:27.789952   66919 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:27.790122   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:27.790265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetState
	I0815 01:28:27.791742   66919 fix.go:112] recreateIfNeeded on old-k8s-version-390782: state=Stopped err=<nil>
	I0815 01:28:27.791773   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	W0815 01:28:27.791930   66919 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:27.793654   66919 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-390782" ...
	I0815 01:28:27.794650   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .Start
	I0815 01:28:27.794798   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring networks are active...
	I0815 01:28:27.795554   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network default is active
	I0815 01:28:27.795835   66919 main.go:141] libmachine: (old-k8s-version-390782) Ensuring network mk-old-k8s-version-390782 is active
	I0815 01:28:27.796194   66919 main.go:141] libmachine: (old-k8s-version-390782) Getting domain xml...
	I0815 01:28:27.797069   66919 main.go:141] libmachine: (old-k8s-version-390782) Creating domain...
	I0815 01:28:28.999562   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting to get IP...
	I0815 01:28:29.000288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.000697   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.000787   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.000698   67979 retry.go:31] will retry after 209.337031ms: waiting for machine to come up
	I0815 01:28:29.212345   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.212839   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.212865   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.212796   67979 retry.go:31] will retry after 252.542067ms: waiting for machine to come up
	I0815 01:28:29.467274   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.467659   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.467685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.467607   67979 retry.go:31] will retry after 412.932146ms: waiting for machine to come up
	I0815 01:28:29.882217   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:29.882643   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:29.882672   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:29.882601   67979 retry.go:31] will retry after 526.991017ms: waiting for machine to come up
	I0815 01:28:27.770766   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:27.770800   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771142   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:28:27.771173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:28:27.771381   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:28:27.773059   66492 machine.go:97] duration metric: took 4m37.432079731s to provisionDockerMachine
	I0815 01:28:27.773102   66492 fix.go:56] duration metric: took 4m37.453608342s for fixHost
	I0815 01:28:27.773107   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 4m37.453640668s
	W0815 01:28:27.773125   66492 start.go:714] error starting host: provision: host is not running
	W0815 01:28:27.773209   66492 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0815 01:28:27.773219   66492 start.go:729] Will try again in 5 seconds ...
	I0815 01:28:30.411443   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:30.411819   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:30.411881   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:30.411794   67979 retry.go:31] will retry after 758.953861ms: waiting for machine to come up
	I0815 01:28:31.172721   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.173099   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.173131   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.173045   67979 retry.go:31] will retry after 607.740613ms: waiting for machine to come up
	I0815 01:28:31.782922   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:31.783406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:31.783434   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:31.783343   67979 retry.go:31] will retry after 738.160606ms: waiting for machine to come up
	I0815 01:28:32.523257   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:32.523685   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:32.523716   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:32.523625   67979 retry.go:31] will retry after 904.54249ms: waiting for machine to come up
	I0815 01:28:33.430286   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:33.430690   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:33.430722   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:33.430637   67979 retry.go:31] will retry after 1.55058959s: waiting for machine to come up
	I0815 01:28:34.983386   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:34.983838   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:34.983870   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:34.983788   67979 retry.go:31] will retry after 1.636768205s: waiting for machine to come up
	I0815 01:28:32.775084   66492 start.go:360] acquireMachinesLock for no-preload-884893: {Name:mk1d1abebd831c3c928fd30ac0d08e20b6c0be1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 01:28:36.622595   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:36.623058   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:36.623083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:36.622994   67979 retry.go:31] will retry after 1.777197126s: waiting for machine to come up
	I0815 01:28:38.401812   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:38.402289   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:38.402319   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:38.402247   67979 retry.go:31] will retry after 3.186960364s: waiting for machine to come up
	I0815 01:28:41.592635   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:41.593067   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | unable to find current IP address of domain old-k8s-version-390782 in network mk-old-k8s-version-390782
	I0815 01:28:41.593093   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | I0815 01:28:41.593018   67979 retry.go:31] will retry after 3.613524245s: waiting for machine to come up
	I0815 01:28:46.469326   67000 start.go:364] duration metric: took 4m10.840663216s to acquireMachinesLock for "embed-certs-190398"
	I0815 01:28:46.469405   67000 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:28:46.469425   67000 fix.go:54] fixHost starting: 
	I0815 01:28:46.469913   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:28:46.469951   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:28:46.486446   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0815 01:28:46.486871   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:28:46.487456   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:28:46.487491   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:28:46.487832   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:28:46.488037   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:28:46.488198   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:28:46.489804   67000 fix.go:112] recreateIfNeeded on embed-certs-190398: state=Stopped err=<nil>
	I0815 01:28:46.489863   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	W0815 01:28:46.490033   67000 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:28:46.492240   67000 out.go:177] * Restarting existing kvm2 VM for "embed-certs-190398" ...
	I0815 01:28:45.209122   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.209617   66919 main.go:141] libmachine: (old-k8s-version-390782) Found IP for machine: 192.168.50.21
	I0815 01:28:45.209639   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserving static IP address...
	I0815 01:28:45.209657   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has current primary IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.210115   66919 main.go:141] libmachine: (old-k8s-version-390782) Reserved static IP address: 192.168.50.21
	I0815 01:28:45.210138   66919 main.go:141] libmachine: (old-k8s-version-390782) Waiting for SSH to be available...
	I0815 01:28:45.210160   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.210188   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | skip adding static IP to network mk-old-k8s-version-390782 - found existing host DHCP lease matching {name: "old-k8s-version-390782", mac: "52:54:00:5c:70:6d", ip: "192.168.50.21"}
	I0815 01:28:45.210204   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Getting to WaitForSSH function...
	I0815 01:28:45.212727   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213127   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.213153   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.213307   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH client type: external
	I0815 01:28:45.213354   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa (-rw-------)
	I0815 01:28:45.213388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:28:45.213406   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | About to run SSH command:
	I0815 01:28:45.213437   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | exit 0
	I0815 01:28:45.340616   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | SSH cmd err, output: <nil>: 
	I0815 01:28:45.341118   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetConfigRaw
	I0815 01:28:45.341848   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.344534   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.344934   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.344967   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.345196   66919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/config.json ...
	I0815 01:28:45.345414   66919 machine.go:94] provisionDockerMachine start ...
	I0815 01:28:45.345433   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:45.345699   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.347935   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348249   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.348278   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.348438   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.348609   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348797   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.348957   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.349117   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.349324   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.349337   66919 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:28:45.456668   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:28:45.456701   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.456959   66919 buildroot.go:166] provisioning hostname "old-k8s-version-390782"
	I0815 01:28:45.456987   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.457148   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.460083   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460425   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.460453   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.460613   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.460783   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.460924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.461039   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.461180   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.461392   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.461416   66919 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-390782 && echo "old-k8s-version-390782" | sudo tee /etc/hostname
	I0815 01:28:45.582108   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-390782
	
	I0815 01:28:45.582136   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.585173   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585556   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.585590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.585795   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.585989   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586131   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.586253   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.586445   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.586648   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.586667   66919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-390782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-390782/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-390782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:28:45.700737   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:28:45.700778   66919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:28:45.700802   66919 buildroot.go:174] setting up certificates
	I0815 01:28:45.700812   66919 provision.go:84] configureAuth start
	I0815 01:28:45.700821   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetMachineName
	I0815 01:28:45.701079   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:45.704006   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704384   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.704416   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.704593   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.706737   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707018   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.707041   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.707213   66919 provision.go:143] copyHostCerts
	I0815 01:28:45.707299   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:28:45.707324   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:28:45.707408   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:28:45.707528   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:28:45.707537   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:28:45.707576   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:28:45.707657   66919 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:28:45.707666   66919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:28:45.707701   66919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:28:45.707771   66919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-390782 san=[127.0.0.1 192.168.50.21 localhost minikube old-k8s-version-390782]
	I0815 01:28:45.787190   66919 provision.go:177] copyRemoteCerts
	I0815 01:28:45.787256   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:28:45.787287   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.790159   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790542   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.790590   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.790735   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.790924   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.791097   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.791217   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:45.874561   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:28:45.897869   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 01:28:45.923862   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 01:28:45.950038   66919 provision.go:87] duration metric: took 249.211016ms to configureAuth
	I0815 01:28:45.950065   66919 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:28:45.950301   66919 config.go:182] Loaded profile config "old-k8s-version-390782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 01:28:45.950412   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:45.953288   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953746   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:45.953778   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:45.953902   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:45.954098   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954358   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:45.954569   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:45.954784   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:45.954953   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:45.954967   66919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:28:46.228321   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:28:46.228349   66919 machine.go:97] duration metric: took 882.921736ms to provisionDockerMachine
	I0815 01:28:46.228363   66919 start.go:293] postStartSetup for "old-k8s-version-390782" (driver="kvm2")
	I0815 01:28:46.228375   66919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:28:46.228401   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.228739   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:28:46.228774   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.231605   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.231993   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.232020   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.232216   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.232419   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.232698   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.232919   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.319433   66919 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:28:46.323340   66919 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:28:46.323373   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:28:46.323451   66919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:28:46.323555   66919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:28:46.323658   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:28:46.332594   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:46.354889   66919 start.go:296] duration metric: took 126.511194ms for postStartSetup
	I0815 01:28:46.354930   66919 fix.go:56] duration metric: took 18.581671847s for fixHost
	I0815 01:28:46.354950   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.357987   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358251   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.358277   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.358509   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.358747   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.358934   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.359092   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.359240   66919 main.go:141] libmachine: Using SSH client type: native
	I0815 01:28:46.359425   66919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0815 01:28:46.359438   66919 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:28:46.469167   66919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685326.429908383
	
	I0815 01:28:46.469192   66919 fix.go:216] guest clock: 1723685326.429908383
	I0815 01:28:46.469202   66919 fix.go:229] Guest: 2024-08-15 01:28:46.429908383 +0000 UTC Remote: 2024-08-15 01:28:46.354934297 +0000 UTC m=+256.257437765 (delta=74.974086ms)
	I0815 01:28:46.469231   66919 fix.go:200] guest clock delta is within tolerance: 74.974086ms
	I0815 01:28:46.469236   66919 start.go:83] releasing machines lock for "old-k8s-version-390782", held for 18.696013068s
	I0815 01:28:46.469264   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.469527   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:46.472630   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473053   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.473082   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.473265   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473746   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473931   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .DriverName
	I0815 01:28:46.473998   66919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:28:46.474048   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.474159   66919 ssh_runner.go:195] Run: cat /version.json
	I0815 01:28:46.474188   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHHostname
	I0815 01:28:46.476984   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477012   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477388   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477421   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477445   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:46.477465   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:46.477499   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477615   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHPort
	I0815 01:28:46.477719   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477784   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHKeyPath
	I0815 01:28:46.477845   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477907   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetSSHUsername
	I0815 01:28:46.477975   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.478048   66919 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/old-k8s-version-390782/id_rsa Username:docker}
	I0815 01:28:46.585745   66919 ssh_runner.go:195] Run: systemctl --version
	I0815 01:28:46.592135   66919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:28:46.731888   66919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:28:46.739171   66919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:28:46.739238   66919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:28:46.760211   66919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:28:46.760232   66919 start.go:495] detecting cgroup driver to use...
	I0815 01:28:46.760316   66919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:28:46.778483   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:28:46.791543   66919 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:28:46.791632   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:28:46.804723   66919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:28:46.818794   66919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:28:46.931242   66919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:28:47.091098   66919 docker.go:233] disabling docker service ...
	I0815 01:28:47.091177   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:28:47.105150   66919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:28:47.117485   66919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:28:47.236287   66919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:28:47.376334   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:28:47.389397   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:28:47.406551   66919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 01:28:47.406627   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.416736   66919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:28:47.416803   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.427000   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.437833   66919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:28:47.449454   66919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:28:47.460229   66919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:28:47.469737   66919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:28:47.469800   66919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:28:47.482270   66919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:28:47.491987   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:47.624462   66919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:28:47.759485   66919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:28:47.759546   66919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:28:47.764492   66919 start.go:563] Will wait 60s for crictl version
	I0815 01:28:47.764545   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:47.767890   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:28:47.814241   66919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:28:47.814342   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.842933   66919 ssh_runner.go:195] Run: crio --version
	I0815 01:28:47.873241   66919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 01:28:47.874283   66919 main.go:141] libmachine: (old-k8s-version-390782) Calling .GetIP
	I0815 01:28:47.877389   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.877763   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:70:6d", ip: ""} in network mk-old-k8s-version-390782: {Iface:virbr1 ExpiryTime:2024-08-15 02:28:37 +0000 UTC Type:0 Mac:52:54:00:5c:70:6d Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:old-k8s-version-390782 Clientid:01:52:54:00:5c:70:6d}
	I0815 01:28:47.877793   66919 main.go:141] libmachine: (old-k8s-version-390782) DBG | domain old-k8s-version-390782 has defined IP address 192.168.50.21 and MAC address 52:54:00:5c:70:6d in network mk-old-k8s-version-390782
	I0815 01:28:47.878008   66919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 01:28:47.881794   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:47.893270   66919 kubeadm.go:883] updating cluster {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:28:47.893412   66919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 01:28:47.893466   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:47.939402   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:47.939489   66919 ssh_runner.go:195] Run: which lz4
	I0815 01:28:47.943142   66919 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:28:47.947165   66919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:28:47.947191   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 01:28:49.418409   66919 crio.go:462] duration metric: took 1.475291539s to copy over tarball
	I0815 01:28:49.418479   66919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 01:28:46.493529   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Start
	I0815 01:28:46.493725   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring networks are active...
	I0815 01:28:46.494472   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network default is active
	I0815 01:28:46.494805   67000 main.go:141] libmachine: (embed-certs-190398) Ensuring network mk-embed-certs-190398 is active
	I0815 01:28:46.495206   67000 main.go:141] libmachine: (embed-certs-190398) Getting domain xml...
	I0815 01:28:46.496037   67000 main.go:141] libmachine: (embed-certs-190398) Creating domain...
	I0815 01:28:47.761636   67000 main.go:141] libmachine: (embed-certs-190398) Waiting to get IP...
	I0815 01:28:47.762736   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:47.763100   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:47.763157   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:47.763070   68098 retry.go:31] will retry after 304.161906ms: waiting for machine to come up
	I0815 01:28:48.068645   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.069177   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.069204   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.069148   68098 retry.go:31] will retry after 275.006558ms: waiting for machine to come up
	I0815 01:28:48.345793   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.346294   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.346331   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.346238   68098 retry.go:31] will retry after 325.359348ms: waiting for machine to come up
	I0815 01:28:48.673903   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:48.674489   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:48.674513   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:48.674447   68098 retry.go:31] will retry after 547.495848ms: waiting for machine to come up
	I0815 01:28:49.223465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.224028   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.224062   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.223982   68098 retry.go:31] will retry after 471.418796ms: waiting for machine to come up
	I0815 01:28:49.696567   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:49.697064   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:49.697093   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:49.697019   68098 retry.go:31] will retry after 871.173809ms: waiting for machine to come up
	I0815 01:28:52.212767   66919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.794261663s)
	I0815 01:28:52.212795   66919 crio.go:469] duration metric: took 2.794358617s to extract the tarball
	I0815 01:28:52.212803   66919 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:28:52.254542   66919 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:28:52.286548   66919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 01:28:52.286571   66919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:28:52.286651   66919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.286675   66919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 01:28:52.286687   66919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.286684   66919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.286704   66919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.286645   66919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.286672   66919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.286649   66919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.288433   66919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.288441   66919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.288473   66919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.288446   66919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:52.288429   66919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.288423   66919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.288633   66919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 01:28:52.526671   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 01:28:52.548397   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.556168   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.560115   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.563338   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.566306   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.576900   66919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 01:28:52.576955   66919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 01:28:52.576999   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.579694   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.639727   66919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 01:28:52.639778   66919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.639828   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.697299   66919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 01:28:52.697346   66919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.697397   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.709988   66919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 01:28:52.710026   66919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 01:28:52.710051   66919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.710072   66919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.710101   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710109   66919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 01:28:52.710121   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710128   66919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.710132   66919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 01:28:52.710146   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.710102   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.710159   66919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.710177   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.710159   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.710198   66919 ssh_runner.go:195] Run: which crictl
	I0815 01:28:52.768699   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.768764   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.768837   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.768892   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.768933   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.768954   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.800404   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:52.893131   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:52.893174   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:52.893241   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:52.918186   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 01:28:52.918203   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:52.918205   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 01:28:52.946507   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 01:28:53.037776   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 01:28:53.037991   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 01:28:53.039379   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 01:28:53.077479   66919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 01:28:53.077542   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 01:28:53.077559   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 01:28:53.096763   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 01:28:53.138129   66919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:28:53.153330   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 01:28:53.153366   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 01:28:53.153368   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 01:28:53.162469   66919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 01:28:53.292377   66919 cache_images.go:92] duration metric: took 1.005786902s to LoadCachedImages
	W0815 01:28:53.292485   66919 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
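The warning above means the cached image tarball for kube-controller-manager v1.20.0 was missing from the local cache directory, so the cached-image load is skipped and the images have to come from the registry instead. A minimal, illustrative check on the node (the image names are taken from the log above; the manual pull is only a hypothetical fallback, not necessarily what minikube itself runs):

	# confirm which of the expected v1.20.0 images are already present in CRI-O
	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.20.0'
	# pull a missing image by hand if needed (illustrative fallback)
	sudo crictl pull registry.k8s.io/kube-controller-manager:v1.20.0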
	I0815 01:28:53.292503   66919 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I0815 01:28:53.292682   66919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-390782 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
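The rendered kubelet unit drop-in above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A quick way to verify what the node's kubelet will actually be started with, assuming shell access to the minikube VM:

	# show the effective kubelet unit, including the minikube-generated drop-in
	sudo systemctl cat kubelet
	# or inspect the drop-in file directly
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf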
	I0815 01:28:53.292781   66919 ssh_runner.go:195] Run: crio config
	I0815 01:28:53.339927   66919 cni.go:84] Creating CNI manager for ""
	I0815 01:28:53.339957   66919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:28:53.339979   66919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:28:53.340009   66919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-390782 NodeName:old-k8s-version-390782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 01:28:53.340183   66919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-390782"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
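	minikube writes this generated manifest to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp below) and feeds it to the kubeadm init phases later in the log. Note that setting all evictionHard thresholds to "0%" effectively disables disk-pressure eviction, matching the "disable disk resource management by default" comment. A minimal check of the rendered file on the node, assuming the paths shown in this log:

	# inspect the rendered kubeadm config that the init phases will consume
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# confirm the kubelet eviction thresholds were rendered as "0%"
	sudo grep -A4 'evictionHard' /var/tmp/minikube/kubeadm.yaml.new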
	
	I0815 01:28:53.340278   66919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 01:28:53.350016   66919 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:28:53.350117   66919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:28:53.359379   66919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 01:28:53.375719   66919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:28:53.392054   66919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 01:28:53.409122   66919 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0815 01:28:53.412646   66919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:28:53.423917   66919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:28:53.560712   66919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:28:53.576488   66919 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782 for IP: 192.168.50.21
	I0815 01:28:53.576512   66919 certs.go:194] generating shared ca certs ...
	I0815 01:28:53.576530   66919 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:53.576748   66919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:28:53.576823   66919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:28:53.576837   66919 certs.go:256] generating profile certs ...
	I0815 01:28:53.576975   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.key
	I0815 01:28:53.577044   66919 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key.d79afed6
	I0815 01:28:53.577113   66919 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key
	I0815 01:28:53.577274   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:28:53.577323   66919 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:28:53.577337   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:28:53.577369   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:28:53.577400   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:28:53.577431   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:28:53.577529   66919 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:28:53.578239   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:28:53.622068   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:28:53.648947   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:28:53.681678   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:28:53.719636   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 01:28:53.744500   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:28:53.777941   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:28:53.810631   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:28:53.832906   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:28:53.854487   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:28:53.876448   66919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:28:53.898487   66919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:28:53.914102   66919 ssh_runner.go:195] Run: openssl version
	I0815 01:28:53.919563   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:28:53.929520   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933730   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.933775   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:28:53.939056   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:28:53.948749   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:28:53.958451   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962624   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.962669   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:28:53.967800   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:28:53.977228   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:28:53.986801   66919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990797   66919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.990842   66919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:28:53.995930   66919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
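The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective CA certificates; hash-named links in /etc/ssl/certs are how OpenSSL's directory lookup finds trust anchors. A small sketch of how such a link is derived, using the minikubeCA certificate from this run as the example:

	# compute the subject hash that OpenSSL uses for directory lookups
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# create the hash-named symlink (this run produced b5213941.0)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"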
	I0815 01:28:54.005862   66919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:28:54.010115   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:28:54.015861   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:28:54.021980   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:28:54.028344   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:28:54.034172   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:28:54.040316   66919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
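The -checkend 86400 calls above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means the cert is fine, non-zero means it expires within the window and would need to be regenerated. A minimal, illustrative check using one of the certs copied earlier in this log:

	# exit 0 if the apiserver cert remains valid for at least another 24h, 1 otherwise
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h"
	fi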
	I0815 01:28:54.046525   66919 kubeadm.go:392] StartCluster: {Name:old-k8s-version-390782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-390782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:28:54.046624   66919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:28:54.046671   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.086420   66919 cri.go:89] found id: ""
	I0815 01:28:54.086498   66919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:28:54.096425   66919 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:28:54.096449   66919 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:28:54.096500   66919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:28:54.106217   66919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:28:54.107254   66919 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-390782" does not appear in /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:28:54.107872   66919 kubeconfig.go:62] /home/jenkins/minikube-integration/19443-13088/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-390782" cluster setting kubeconfig missing "old-k8s-version-390782" context setting]
	I0815 01:28:54.109790   66919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:28:54.140029   66919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:28:54.150180   66919 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.21
	I0815 01:28:54.150237   66919 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:28:54.150251   66919 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:28:54.150308   66919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:28:54.186400   66919 cri.go:89] found id: ""
	I0815 01:28:54.186485   66919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:28:54.203351   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:28:54.212828   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:28:54.212849   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:28:54.212910   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:28:54.221577   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:28:54.221641   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:28:54.230730   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:28:54.239213   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:28:54.239279   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:28:54.248268   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.256909   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:28:54.256968   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:28:54.266043   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:28:54.276366   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:28:54.276432   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:28:54.285945   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:28:54.295262   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:54.419237   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.098102   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:50.569917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:50.570436   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:50.570465   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:50.570394   68098 retry.go:31] will retry after 775.734951ms: waiting for machine to come up
	I0815 01:28:51.347459   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:51.347917   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:51.347944   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:51.347869   68098 retry.go:31] will retry after 1.319265032s: waiting for machine to come up
	I0815 01:28:52.668564   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:52.669049   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:52.669116   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:52.669015   68098 retry.go:31] will retry after 1.765224181s: waiting for machine to come up
	I0815 01:28:54.435556   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:54.436039   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:54.436071   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:54.435975   68098 retry.go:31] will retry after 1.545076635s: waiting for machine to come up
	I0815 01:28:55.318597   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.420419   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:28:55.514727   66919 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:28:55.514825   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.015883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:56.515816   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.015709   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:57.515895   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.015127   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:58.515796   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.014975   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:59.515893   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:00.015918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:28:55.982693   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:55.983288   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:55.983328   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:55.983112   68098 retry.go:31] will retry after 2.788039245s: waiting for machine to come up
	I0815 01:28:58.773761   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:28:58.774166   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:28:58.774194   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:28:58.774087   68098 retry.go:31] will retry after 2.531335813s: waiting for machine to come up
	I0815 01:29:00.514933   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.015014   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:01.515780   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.015534   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:02.515502   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.015539   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:03.515643   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.015544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:04.515786   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:05.015882   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
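The repeated pgrep calls above poll for the API server process after the kubeadm init phases: -f matches against the full command line, -x requires the whole command line to match the pattern, and -n returns only the newest matching PID. A minimal sketch of the same wait loop, assuming a shell on the node:

	# poll until a kube-apiserver process whose full command line matches appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done
	echo "kube-apiserver is running"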
	I0815 01:29:01.309051   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:01.309593   67000 main.go:141] libmachine: (embed-certs-190398) DBG | unable to find current IP address of domain embed-certs-190398 in network mk-embed-certs-190398
	I0815 01:29:01.309634   67000 main.go:141] libmachine: (embed-certs-190398) DBG | I0815 01:29:01.309552   68098 retry.go:31] will retry after 3.239280403s: waiting for machine to come up
	I0815 01:29:04.552370   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.552978   67000 main.go:141] libmachine: (embed-certs-190398) Found IP for machine: 192.168.72.151
	I0815 01:29:04.553002   67000 main.go:141] libmachine: (embed-certs-190398) Reserving static IP address...
	I0815 01:29:04.553047   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has current primary IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.553427   67000 main.go:141] libmachine: (embed-certs-190398) Reserved static IP address: 192.168.72.151
	I0815 01:29:04.553452   67000 main.go:141] libmachine: (embed-certs-190398) Waiting for SSH to be available...
	I0815 01:29:04.553481   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.553510   67000 main.go:141] libmachine: (embed-certs-190398) DBG | skip adding static IP to network mk-embed-certs-190398 - found existing host DHCP lease matching {name: "embed-certs-190398", mac: "52:54:00:5a:91:1a", ip: "192.168.72.151"}
	I0815 01:29:04.553525   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Getting to WaitForSSH function...
	I0815 01:29:04.555694   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556036   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.556067   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.556168   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH client type: external
	I0815 01:29:04.556189   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa (-rw-------)
	I0815 01:29:04.556221   67000 main.go:141] libmachine: (embed-certs-190398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:04.556235   67000 main.go:141] libmachine: (embed-certs-190398) DBG | About to run SSH command:
	I0815 01:29:04.556252   67000 main.go:141] libmachine: (embed-certs-190398) DBG | exit 0
	I0815 01:29:04.680599   67000 main.go:141] libmachine: (embed-certs-190398) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:04.680961   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetConfigRaw
	I0815 01:29:04.681526   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:04.683847   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684244   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.684270   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.684531   67000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/config.json ...
	I0815 01:29:04.684755   67000 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:04.684772   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:04.684989   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.687469   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687823   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.687848   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.687972   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.688135   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688267   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.688389   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.688525   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.688749   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.688761   67000 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:04.788626   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:04.788670   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.788914   67000 buildroot.go:166] provisioning hostname "embed-certs-190398"
	I0815 01:29:04.788940   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:04.789136   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.791721   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792153   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.792198   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.792398   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.792580   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792756   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.792861   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.793053   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.793293   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.793312   67000 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-190398 && echo "embed-certs-190398" | sudo tee /etc/hostname
	I0815 01:29:04.910133   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-190398
	
	I0815 01:29:04.910160   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:04.913241   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913666   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:04.913701   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:04.913887   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:04.914131   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914336   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:04.914491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:04.914665   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:04.914884   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:04.914909   67000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-190398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-190398/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-190398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:05.025052   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
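The SSH command above makes the newly set hostname resolvable locally: if no /etc/hosts entry mentions embed-certs-190398, it either rewrites an existing 127.0.1.1 line or appends one. A quick, illustrative check of the result on the node:

	# after provisioning, the node resolves its own hostname via /etc/hosts
	grep -n 'embed-certs-190398' /etc/hosts
	# expected (illustrative): 127.0.1.1 embed-certs-190398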
	I0815 01:29:05.025089   67000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:05.025115   67000 buildroot.go:174] setting up certificates
	I0815 01:29:05.025127   67000 provision.go:84] configureAuth start
	I0815 01:29:05.025139   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetMachineName
	I0815 01:29:05.025439   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.028224   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028582   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.028618   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.028753   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.030960   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.031335   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.031524   67000 provision.go:143] copyHostCerts
	I0815 01:29:05.031598   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:05.031608   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:05.031663   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:05.031745   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:05.031752   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:05.031773   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:05.031825   67000 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:05.031832   67000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:05.031849   67000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:05.031909   67000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.embed-certs-190398 san=[127.0.0.1 192.168.72.151 embed-certs-190398 localhost minikube]
	I0815 01:29:05.246512   67000 provision.go:177] copyRemoteCerts
	I0815 01:29:05.246567   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:05.246590   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.249286   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249570   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.249609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.249736   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.249933   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.250109   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.250337   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.330596   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 01:29:05.352611   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:05.374001   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:05.394724   67000 provision.go:87] duration metric: took 369.584008ms to configureAuth
	I0815 01:29:05.394750   67000 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:05.394917   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:05.394982   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.397305   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397620   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.397658   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.397748   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.397924   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398039   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.398150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.398297   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.398465   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.398486   67000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:05.893255   67451 start.go:364] duration metric: took 3m0.662991861s to acquireMachinesLock for "default-k8s-diff-port-018537"
	I0815 01:29:05.893347   67451 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:05.893356   67451 fix.go:54] fixHost starting: 
	I0815 01:29:05.893803   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:05.893846   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:05.910516   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0815 01:29:05.910882   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:05.911391   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:05.911415   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:05.911748   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:05.911959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:05.912088   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:05.913672   67451 fix.go:112] recreateIfNeeded on default-k8s-diff-port-018537: state=Stopped err=<nil>
	I0815 01:29:05.913699   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	W0815 01:29:05.913861   67451 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:05.915795   67451 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-018537" ...
	I0815 01:29:05.666194   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:05.666225   67000 machine.go:97] duration metric: took 981.45738ms to provisionDockerMachine
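The sysconfig write and CRI-O restart a few lines above drop a CRIO_MINIKUBE_OPTIONS override into /etc/sysconfig/crio.minikube (marking the service CIDR 10.96.0.0/12 as an insecure registry range) and restart the runtime to pick it up. A quick, illustrative way to confirm the override landed and CRI-O came back up:

	# verify the minikube-specific CRI-O options file
	cat /etc/sysconfig/crio.minikube
	# confirm CRI-O restarted cleanly
	sudo systemctl is-active crio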
	I0815 01:29:05.666241   67000 start.go:293] postStartSetup for "embed-certs-190398" (driver="kvm2")
	I0815 01:29:05.666253   67000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:05.666275   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.666640   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:05.666671   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.669648   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670098   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.670124   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.670300   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.670507   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.670677   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.670835   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.750950   67000 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:05.755040   67000 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:05.755066   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:05.755139   67000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:05.755244   67000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:05.755366   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:05.764271   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:05.786563   67000 start.go:296] duration metric: took 120.295403ms for postStartSetup
	I0815 01:29:05.786609   67000 fix.go:56] duration metric: took 19.317192467s for fixHost
	I0815 01:29:05.786634   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.789273   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789677   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.789708   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.789886   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.790082   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790244   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.790371   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.790654   67000 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:05.790815   67000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.151 22 <nil> <nil>}
	I0815 01:29:05.790826   67000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 01:29:05.893102   67000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685345.869278337
	
	I0815 01:29:05.893123   67000 fix.go:216] guest clock: 1723685345.869278337
	I0815 01:29:05.893131   67000 fix.go:229] Guest: 2024-08-15 01:29:05.869278337 +0000 UTC Remote: 2024-08-15 01:29:05.786613294 +0000 UTC m=+270.290281945 (delta=82.665043ms)
	I0815 01:29:05.893159   67000 fix.go:200] guest clock delta is within tolerance: 82.665043ms
	I0815 01:29:05.893165   67000 start.go:83] releasing machines lock for "embed-certs-190398", held for 19.423784798s
	I0815 01:29:05.893192   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.893484   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:05.896152   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896528   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.896555   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.896735   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897183   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897392   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:29:05.897480   67000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:05.897536   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.897681   67000 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:05.897704   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:29:05.900443   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900543   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900814   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900845   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.900873   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:05.900891   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:05.901123   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901150   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:29:05.901342   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901346   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901531   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:29:05.901708   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:05.901709   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:29:06.008891   67000 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:06.014975   67000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:06.158062   67000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:06.164485   67000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:06.164550   67000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:06.180230   67000 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
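
Any pre-existing bridge or podman CNI configuration under /etc/cni/net.d is renamed with a .mk_disabled suffix (here 87-podman-bridge.conflist) so it cannot conflict with the CNI that minikube configures. A rough local equivalent of that find/mv step, as a sketch rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs in dir by appending
    // ".mk_disabled", mirroring the "find ... -exec mv" command in the log.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("disabled:", disabled)
    }
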
	I0815 01:29:06.180250   67000 start.go:495] detecting cgroup driver to use...
	I0815 01:29:06.180301   67000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:06.197927   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:06.210821   67000 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:06.210885   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:06.225614   67000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:06.239266   67000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:06.357793   67000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:06.511990   67000 docker.go:233] disabling docker service ...
	I0815 01:29:06.512061   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:06.529606   67000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:06.547241   67000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:06.689512   67000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:06.807041   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:06.820312   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:06.837948   67000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:06.838011   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.848233   67000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:06.848311   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.858132   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.868009   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.879629   67000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:06.893713   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.907444   67000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:06.928032   67000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
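
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, run conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A hedged sketch of the first three rewrites in Go (the sysctl edit is omitted for brevity; this helper is illustrative only and needs the same root privileges as the sudo'd sed calls):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf mirrors the sed edits in the log: pin the pause image,
    // switch the cgroup manager to cgroupfs, and run conmon in the pod cgroup.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := string(data)
        // s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|
        out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10"`)
        // /conmon_cgroup = .*/d
        out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(out, "")
        // set cgroup_manager and append conmon_cgroup = "pod" right after it
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        return os.WriteFile(path, []byte(out), 0o644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            fmt.Println("error:", err)
        }
    }
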
	I0815 01:29:06.943650   67000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:06.957750   67000 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:06.957805   67000 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:06.972288   67000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:06.982187   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:07.154389   67000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:07.287847   67000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:07.287933   67000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:07.292283   67000 start.go:563] Will wait 60s for crictl version
	I0815 01:29:07.292342   67000 ssh_runner.go:195] Run: which crictl
	I0815 01:29:07.295813   67000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:07.332788   67000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:07.332889   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.359063   67000 ssh_runner.go:195] Run: crio --version
	I0815 01:29:07.387496   67000 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:05.917276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Start
	I0815 01:29:05.917498   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring networks are active...
	I0815 01:29:05.918269   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network default is active
	I0815 01:29:05.918599   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Ensuring network mk-default-k8s-diff-port-018537 is active
	I0815 01:29:05.919147   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Getting domain xml...
	I0815 01:29:05.919829   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Creating domain...
	I0815 01:29:07.208213   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting to get IP...
	I0815 01:29:07.209456   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.209933   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.209843   68264 retry.go:31] will retry after 254.654585ms: waiting for machine to come up
	I0815 01:29:07.466248   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466679   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.466708   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.466644   68264 retry.go:31] will retry after 285.54264ms: waiting for machine to come up
	I0815 01:29:07.754037   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754537   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:07.754578   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:07.754511   68264 retry.go:31] will retry after 336.150506ms: waiting for machine to come up
	I0815 01:29:08.091923   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092402   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.092444   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.092368   68264 retry.go:31] will retry after 591.285134ms: waiting for machine to come up
	I0815 01:29:08.685380   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685707   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:08.685735   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:08.685690   68264 retry.go:31] will retry after 701.709425ms: waiting for machine to come up
	I0815 01:29:09.388574   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:09.389053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:09.388979   68264 retry.go:31] will retry after 916.264423ms: waiting for machine to come up
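
While the default-k8s-diff-port-018537 domain boots, the driver repeatedly asks libvirt for a DHCP lease and backs off with growing, jittered delays (the retry.go lines above). A generic sketch of that wait loop with a caller-supplied probe; the initial delay, cap, and jitter here are illustrative values, not minikube's exact policy:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries probe with an exponential, jittered backoff until it
    // succeeds or the timeout passes, mirroring the "will retry after ..." lines.
    func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for {
            ip, err := probe()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for machine: %w", err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
    }

    func main() {
        attempts := 0
        ip, err := waitFor(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("waiting for machine to come up")
            }
            return "192.168.39.10", nil // hypothetical lease for the example
        }, time.Minute)
        fmt.Println(ip, err)
    }
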
	I0815 01:29:05.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.015647   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:06.514952   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.014969   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.515614   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.015757   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:08.515184   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.014931   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:09.515381   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.015761   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:07.389220   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetIP
	I0815 01:29:07.392416   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.392842   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:29:07.392868   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:29:07.393095   67000 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:07.396984   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:07.410153   67000 kubeadm.go:883] updating cluster {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:07.410275   67000 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:07.410348   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:07.447193   67000 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:07.447255   67000 ssh_runner.go:195] Run: which lz4
	I0815 01:29:07.451046   67000 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 01:29:07.454808   67000 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:07.454836   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:08.696070   67000 crio.go:462] duration metric: took 1.245060733s to copy over tarball
	I0815 01:29:08.696174   67000 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
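
Because no preloaded images were found and /preloaded.tar.lz4 did not exist, the ~389 MB preload tarball is copied to the guest and unpacked into /var with the tar command above. A small sketch of just the extract step, assuming the archive is already on the machine (a plain command wrapper, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks the lz4-compressed image preload into /var, keeping
    // extended attributes, exactly as the tar invocation in the log does.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println(err)
        }
    }
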
	I0815 01:29:10.306552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.306969   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:10.307001   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:10.306912   68264 retry.go:31] will retry after 1.186920529s: waiting for machine to come up
	I0815 01:29:11.494832   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495288   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:11.495324   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:11.495213   68264 retry.go:31] will retry after 1.049148689s: waiting for machine to come up
	I0815 01:29:12.546492   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546872   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:12.546898   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:12.546844   68264 retry.go:31] will retry after 1.689384408s: waiting for machine to come up
	I0815 01:29:14.237471   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238081   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:14.238134   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:14.238011   68264 retry.go:31] will retry after 1.557759414s: waiting for machine to come up
	I0815 01:29:10.515131   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:11.515740   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.015002   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:12.515169   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.015676   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.515330   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.015193   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.515742   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.015837   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:10.809989   67000 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.113786525s)
	I0815 01:29:10.810014   67000 crio.go:469] duration metric: took 2.113915636s to extract the tarball
	I0815 01:29:10.810021   67000 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:10.845484   67000 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:10.886403   67000 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:10.886424   67000 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:10.886433   67000 kubeadm.go:934] updating node { 192.168.72.151 8443 v1.31.0 crio true true} ...
	I0815 01:29:10.886550   67000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-190398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:10.886646   67000 ssh_runner.go:195] Run: crio config
	I0815 01:29:10.933915   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:10.933946   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:10.933963   67000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:10.933985   67000 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.151 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-190398 NodeName:embed-certs-190398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:10.934114   67000 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-190398"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:10.934179   67000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:10.943778   67000 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:10.943839   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:10.952852   67000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 01:29:10.968026   67000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:10.982813   67000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 01:29:10.998314   67000 ssh_runner.go:195] Run: grep 192.168.72.151	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:11.001818   67000 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:11.012933   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:11.147060   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:11.170825   67000 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398 for IP: 192.168.72.151
	I0815 01:29:11.170850   67000 certs.go:194] generating shared ca certs ...
	I0815 01:29:11.170871   67000 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:11.171064   67000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:11.171131   67000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:11.171146   67000 certs.go:256] generating profile certs ...
	I0815 01:29:11.171251   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/client.key
	I0815 01:29:11.171359   67000 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key.7cdd5698
	I0815 01:29:11.171414   67000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key
	I0815 01:29:11.171556   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:11.171593   67000 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:11.171602   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:11.171624   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:11.171647   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:11.171676   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:11.171730   67000 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:11.172346   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:11.208182   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:11.236641   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:11.277018   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:11.304926   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 01:29:11.335397   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:11.358309   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:11.380632   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/embed-certs-190398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:11.403736   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:11.425086   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:11.448037   67000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:11.470461   67000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:11.486415   67000 ssh_runner.go:195] Run: openssl version
	I0815 01:29:11.492028   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:11.502925   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507270   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.507323   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:11.513051   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:11.523911   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:11.534614   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538753   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.538813   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:11.544194   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:29:11.554387   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:11.564690   67000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568810   67000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.568873   67000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:11.575936   67000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:11.589152   67000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:11.594614   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:11.601880   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:11.609471   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:11.617010   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:11.623776   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:11.629262   67000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
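
Each of the openssl ... -checkend 86400 commands asks whether the certificate will still be valid 24 hours from now; a failing check would force certificate regeneration before the restart continues. The same question can be answered directly with crypto/x509; a minimal sketch using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid d from
    // now, the same question "openssl x509 -checkend" answers.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
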
	I0815 01:29:11.634708   67000 kubeadm.go:392] StartCluster: {Name:embed-certs-190398 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-190398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:11.634821   67000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:11.634890   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.676483   67000 cri.go:89] found id: ""
	I0815 01:29:11.676559   67000 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:11.686422   67000 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:11.686445   67000 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:11.686494   67000 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:11.695319   67000 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:11.696472   67000 kubeconfig.go:125] found "embed-certs-190398" server: "https://192.168.72.151:8443"
	I0815 01:29:11.699906   67000 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:11.709090   67000 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.151
	I0815 01:29:11.709119   67000 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:11.709145   67000 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:11.709211   67000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:11.742710   67000 cri.go:89] found id: ""
	I0815 01:29:11.742786   67000 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:11.758986   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:11.768078   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:11.768100   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:11.768150   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:29:11.776638   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:11.776724   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:11.785055   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:29:11.793075   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:11.793127   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:11.801516   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.809527   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:11.809572   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:11.817855   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:29:11.826084   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:11.826157   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:29:11.835699   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:11.844943   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:11.961226   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.030548   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069293244s)
	I0815 01:29:13.030577   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.218385   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:13.302667   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
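
Instead of a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A compact sketch of running that same phase sequence; the env PATH prefix used in the log is omitted here for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phase sequence taken from the kubeadm init phase commands above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
                return
            }
            fmt.Printf("phase %v done\n", p)
        }
    }
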
	I0815 01:29:13.397530   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:13.397630   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:13.898538   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.398613   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:14.897833   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.397759   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
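
api_server.go first waits for the kube-apiserver process itself to exist, re-running the pgrep above roughly every 500ms (hence the half-second spacing of the Run lines). A minimal polling sketch under that assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" shows up, or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Same pattern as the log: match against the full command line.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(time.Minute))
    }
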
	I0815 01:29:15.798041   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798467   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:15.798512   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:15.798446   68264 retry.go:31] will retry after 2.538040218s: waiting for machine to come up
	I0815 01:29:18.338522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338961   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:18.338988   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:18.338910   68264 retry.go:31] will retry after 3.121146217s: waiting for machine to come up
	I0815 01:29:15.515901   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.015290   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:16.514956   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.015924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:17.515782   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.014890   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:18.515482   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:19.515830   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.015304   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.897957   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:15.910962   67000 api_server.go:72] duration metric: took 2.513430323s to wait for apiserver process to appear ...
	I0815 01:29:15.910999   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:15.911033   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.650453   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.650485   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.650498   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.686925   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:18.686951   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:18.911228   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:18.915391   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:18.915424   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.412000   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.419523   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.419562   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:19.911102   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:19.918074   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:19.918110   67000 api_server.go:103] status: https://192.168.72.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:20.411662   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:29:20.417395   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:29:20.423058   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:20.423081   67000 api_server.go:131] duration metric: took 4.512072378s to wait for apiserver health ...
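The wait above polls https://192.168.72.151:8443/healthz roughly every 500ms, tolerating the 500 "healthz check failed" responses while the bootstrap post-start hooks finish, and stops once the endpoint returns 200. A minimal Go sketch of such a poll loop follows; it is illustrative, not minikube's actual api_server.go code, and the URL, interval and timeout are taken from or assumed from the log.

    // Poll an apiserver healthz endpoint until it returns 200 or the timeout elapses.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The apiserver serves a self-signed certificate during bootstrap,
    		// so a local health probe typically skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: control plane is up
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.151:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }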
	I0815 01:29:20.423089   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:29:20.423095   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:20.424876   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:20.426131   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:20.450961   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
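The bridge CNI step above only shows that a 496-byte /etc/cni/net.d/1-k8s.conflist is copied onto the guest, not its contents. For illustration, a generic bridge-plus-portmap conflist of the same general shape, written from Go; every field value here is an assumption, not the exact file minikube ships.

    // Write an illustrative bridge CNI configuration to the path used in the log.
    package main

    import (
    	"log"
    	"os"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// 0644 so the kubelet/CRI-O can read the network config.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }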
	I0815 01:29:20.474210   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:20.486417   67000 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:20.486452   67000 system_pods.go:61] "coredns-6f6b679f8f-kgklr" [5e07a5eb-5ff5-4c1c-9fc7-0a266389c235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:20.486463   67000 system_pods.go:61] "etcd-embed-certs-190398" [11567f44-26c0-4cdc-81f4-d7f88eb415e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:20.486480   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [da9ce1f1-705f-4b23-ace7-794d277e5d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:20.486495   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [0a4c8153-f94c-4d24-9d2f-38e3eebd8649] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:20.486509   67000 system_pods.go:61] "kube-proxy-bmddn" [50e8d666-29d5-45b6-82a7-608402dfb7b1] Running
	I0815 01:29:20.486515   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [483d04a2-16c4-4c0d-81e2-dbdfa2141981] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:20.486520   67000 system_pods.go:61] "metrics-server-6867b74b74-sfnng" [c2088569-2e49-4ccd-bd7c-bcd454e75b1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:20.486528   67000 system_pods.go:61] "storage-provisioner" [ad082138-0c63-43a5-8052-5a7126a6ec77] Running
	I0815 01:29:20.486534   67000 system_pods.go:74] duration metric: took 12.306432ms to wait for pod list to return data ...
	I0815 01:29:20.486546   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:20.489727   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:20.489751   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:20.489763   67000 node_conditions.go:105] duration metric: took 3.21035ms to run NodePressure ...
	I0815 01:29:20.489782   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:21.461547   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | unable to find current IP address of domain default-k8s-diff-port-018537 in network mk-default-k8s-diff-port-018537
	I0815 01:29:21.462083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | I0815 01:29:21.462013   68264 retry.go:31] will retry after 4.52196822s: waiting for machine to come up
	I0815 01:29:20.515183   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.015283   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:21.515686   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.015404   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:22.515935   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.015577   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:23.515114   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.015146   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:24.515849   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:25.014883   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:20.750707   67000 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766067   67000 kubeadm.go:739] kubelet initialised
	I0815 01:29:20.766089   67000 kubeadm.go:740] duration metric: took 15.355118ms waiting for restarted kubelet to initialise ...
	I0815 01:29:20.766099   67000 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:20.771715   67000 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.778596   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778617   67000 pod_ready.go:81] duration metric: took 6.879509ms for pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.778630   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "coredns-6f6b679f8f-kgklr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.778638   67000 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.783422   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783450   67000 pod_ready.go:81] duration metric: took 4.801812ms for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.783461   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "etcd-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.783473   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:20.788877   67000 pod_ready.go:97] node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788896   67000 pod_ready.go:81] duration metric: took 5.41319ms for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:20.788904   67000 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-190398" hosting pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-190398" has status "Ready":"False"
	I0815 01:29:20.788909   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:22.795340   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:25.296907   67000 pod_ready.go:102] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
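The pod_ready.go lines above wait for each system-critical pod's Ready condition and skip pods whose node is itself not Ready. A minimal client-go sketch of the core of that check follows; the kubeconfig path, pod name and timeout are placeholders, and the node-Ready short-circuit is only noted in a comment.

    // Wait for a kube-system pod to report the Ready condition.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(conds []corev1.PodCondition) bool {
    	for _, c := range conds {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// A fuller version would first check the hosting node's Ready
    		// condition and skip the pod when the node is not Ready, as the
    		// pod_ready.go log lines above do.
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-embed-certs-190398", metav1.GetOptions{})
    		if err == nil && isReady(pod.Status.Conditions) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }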
	I0815 01:29:27.201181   66492 start.go:364] duration metric: took 54.426048174s to acquireMachinesLock for "no-preload-884893"
	I0815 01:29:27.201235   66492 start.go:96] Skipping create...Using existing machine configuration
	I0815 01:29:27.201317   66492 fix.go:54] fixHost starting: 
	I0815 01:29:27.201776   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:27.201818   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:27.218816   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 01:29:27.219223   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:27.219731   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:29:27.219754   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:27.220146   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:27.220342   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:27.220507   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:29:27.221962   66492 fix.go:112] recreateIfNeeded on no-preload-884893: state=Stopped err=<nil>
	I0815 01:29:27.221988   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	W0815 01:29:27.222177   66492 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 01:29:27.224523   66492 out.go:177] * Restarting existing kvm2 VM for "no-preload-884893" ...
	I0815 01:29:25.986027   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986585   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Found IP for machine: 192.168.39.223
	I0815 01:29:25.986616   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has current primary IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.986629   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserving static IP address...
	I0815 01:29:25.987034   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.987066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | skip adding static IP to network mk-default-k8s-diff-port-018537 - found existing host DHCP lease matching {name: "default-k8s-diff-port-018537", mac: "52:54:00:ec:53:52", ip: "192.168.39.223"}
	I0815 01:29:25.987085   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Reserved static IP address: 192.168.39.223
	I0815 01:29:25.987108   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Waiting for SSH to be available...
	I0815 01:29:25.987124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Getting to WaitForSSH function...
	I0815 01:29:25.989426   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989800   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:25.989831   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:25.989937   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH client type: external
	I0815 01:29:25.989962   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa (-rw-------)
	I0815 01:29:25.990011   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:25.990026   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | About to run SSH command:
	I0815 01:29:25.990048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | exit 0
	I0815 01:29:26.121218   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | SSH cmd err, output: <nil>: 
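WaitForSSH above keeps re-running "exit 0" over SSH with the machine's id_rsa key until the command succeeds. A small sketch with golang.org/x/crypto/ssh; the address, user and key path are copied from the log, while the retry policy is an assumption.

    // Retry a trivial SSH command until the guest's sshd is reachable.
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func runExit0(addr, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }

    func main() {
    	addr := "192.168.39.223:22"
    	key := "/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa"
    	for i := 0; i < 30; i++ {
    		if err := runExit0(addr, "docker", key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }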
	I0815 01:29:26.121655   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetConfigRaw
	I0815 01:29:26.122265   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.125083   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125483   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.125513   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.125757   67451 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/config.json ...
	I0815 01:29:26.125978   67451 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:26.126004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.126235   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.128419   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128787   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.128814   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.128963   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.129124   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129274   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.129420   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.129603   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.129828   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.129843   67451 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:26.236866   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:26.236900   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237136   67451 buildroot.go:166] provisioning hostname "default-k8s-diff-port-018537"
	I0815 01:29:26.237158   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.237334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.240243   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240760   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.240791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.240959   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.241203   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241415   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.241581   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.241741   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.241903   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.241916   67451 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-018537 && echo "default-k8s-diff-port-018537" | sudo tee /etc/hostname
	I0815 01:29:26.358127   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-018537
	
	I0815 01:29:26.358159   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.361276   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361664   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.361694   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.361841   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.362013   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362191   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.362368   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.362517   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.362704   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.362729   67451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-018537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-018537/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-018537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:26.479326   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 01:29:26.479357   67451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:26.479398   67451 buildroot.go:174] setting up certificates
	I0815 01:29:26.479411   67451 provision.go:84] configureAuth start
	I0815 01:29:26.479440   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetMachineName
	I0815 01:29:26.479791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:26.482464   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.482845   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.482873   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.483023   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.485502   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.485960   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.485995   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.486135   67451 provision.go:143] copyHostCerts
	I0815 01:29:26.486194   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:26.486214   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:26.486273   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:26.486384   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:26.486394   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:26.486419   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:26.486480   67451 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:26.486487   67451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:26.486508   67451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:26.486573   67451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-018537 san=[127.0.0.1 192.168.39.223 default-k8s-diff-port-018537 localhost minikube]
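The provision step above signs a server certificate with the profile's CA, embedding the listed SANs (127.0.0.1, 192.168.39.223, the machine hostname, localhost, minikube). A standard-library sketch of that kind of CA-signed server certificate; for self-containment it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided.

    // Create a CA-signed server certificate with IP and DNS SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (in the real flow this is loaded from certs/ca.pem and ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-018537"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"default-k8s-diff-port-018537", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.223")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }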
	I0815 01:29:26.563251   67451 provision.go:177] copyRemoteCerts
	I0815 01:29:26.563309   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:26.563337   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.566141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566481   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.566506   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.566737   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.566947   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.567087   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.567208   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:26.650593   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:26.673166   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 01:29:26.695563   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 01:29:26.717169   67451 provision.go:87] duration metric: took 237.742408ms to configureAuth
	I0815 01:29:26.717198   67451 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:26.717373   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:26.717453   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.720247   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720620   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.720648   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.720815   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.721007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721176   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.721302   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.721484   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:26.721663   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:26.721681   67451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:26.972647   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:26.972691   67451 machine.go:97] duration metric: took 846.694776ms to provisionDockerMachine
	I0815 01:29:26.972706   67451 start.go:293] postStartSetup for "default-k8s-diff-port-018537" (driver="kvm2")
	I0815 01:29:26.972716   67451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:26.972731   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:26.973032   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:26.973053   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:26.975828   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976300   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:26.976334   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:26.976531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:26.976827   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:26.976999   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:26.977111   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.059130   67451 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:27.062867   67451 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:27.062893   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:27.062954   67451 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:27.063024   67451 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:27.063119   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:27.072111   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:27.093976   67451 start.go:296] duration metric: took 121.256938ms for postStartSetup
	I0815 01:29:27.094023   67451 fix.go:56] duration metric: took 21.200666941s for fixHost
	I0815 01:29:27.094048   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.096548   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.096881   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.096912   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.097059   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.097238   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097400   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.097511   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.097664   67451 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:27.097842   67451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I0815 01:29:27.097858   67451 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:27.201028   67451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685367.180566854
	
	I0815 01:29:27.201053   67451 fix.go:216] guest clock: 1723685367.180566854
	I0815 01:29:27.201062   67451 fix.go:229] Guest: 2024-08-15 01:29:27.180566854 +0000 UTC Remote: 2024-08-15 01:29:27.094027897 +0000 UTC m=+201.997769057 (delta=86.538957ms)
	I0815 01:29:27.201100   67451 fix.go:200] guest clock delta is within tolerance: 86.538957ms
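The two fix.go lines above compare the guest clock (read with what is presumably "date +%s.%N", mangled in the log by a Go format verb) against the host clock and accept the drift if it stays within tolerance. A small sketch of that comparison; the 2s tolerance is an assumption, and the timestamp is the one shown in the log.

    // Parse the guest's epoch timestamp and check the drift against the host clock.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	raw := "1723685367.180566854" // guest output from the log
    	secs, err := strconv.ParseFloat(raw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	const tolerance = 2 * time.Second // assumed tolerance
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
    	}
    }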
	I0815 01:29:27.201107   67451 start.go:83] releasing machines lock for "default-k8s-diff-port-018537", held for 21.307794339s
	I0815 01:29:27.201135   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.201522   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:27.204278   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204674   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.204703   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.204934   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205501   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205713   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:27.205800   67451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:27.205849   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.206127   67451 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:27.206149   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:27.208688   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.208858   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209066   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209394   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209551   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.209552   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:27.209584   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:27.209741   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:27.209748   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.209952   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:27.210001   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.210090   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:27.210256   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:27.293417   67451 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:27.329491   67451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:27.473782   67451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:27.480357   67451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:27.480432   67451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:27.499552   67451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:27.499582   67451 start.go:495] detecting cgroup driver to use...
	I0815 01:29:27.499650   67451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:27.515626   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:27.534025   67451 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:27.534098   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:27.547536   67451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:27.561135   67451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:27.672622   67451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:27.832133   67451 docker.go:233] disabling docker service ...
	I0815 01:29:27.832210   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:27.845647   67451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:27.858233   67451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:27.985504   67451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:28.119036   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:28.133844   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:28.151116   67451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:28.151188   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.162173   67451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:28.162250   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.171954   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.182363   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.192943   67451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:28.203684   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.214360   67451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:28.230572   67451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
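The series of "sudo sed -i" calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. The same kind of edit expressed directly in Go, for illustration only; in the test the edits really are performed via sed over SSH.

    // Rewrite selected keys of the CRI-O drop-in config in place.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(confPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	data = pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	data = cgroupRe.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(confPath, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }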
	I0815 01:29:28.241283   67451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:28.250743   67451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:28.250804   67451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:28.263655   67451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 01:29:28.273663   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:28.408232   67451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:28.558860   67451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:28.558933   67451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:28.564390   67451 start.go:563] Will wait 60s for crictl version
	I0815 01:29:28.564508   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:29:28.568351   67451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:28.616006   67451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:28.616094   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.642621   67451 ssh_runner.go:195] Run: crio --version
	I0815 01:29:28.671150   67451 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:28.672626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetIP
	I0815 01:29:28.675626   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676004   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:28.676038   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:28.676296   67451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:28.680836   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:28.694402   67451 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:28.694519   67451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:28.694574   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:28.730337   67451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:28.730401   67451 ssh_runner.go:195] Run: which lz4
	I0815 01:29:28.734226   67451 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0815 01:29:28.738162   67451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 01:29:28.738185   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 01:29:30.016492   67451 crio.go:462] duration metric: took 1.282301387s to copy over tarball
	I0815 01:29:30.016571   67451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
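A minimal sketch of the extraction step above; the flags preserve security xattrs (file capabilities) and decompress through lz4, which is why the unpacked images are immediately usable by CRI-O:

    # Unpack the preloaded image tarball into /var, preserving capability xattrs and
    # decompressing through lz4 (same flags as the logged command).
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # Remove the ~390 MB tarball once the contents are in place.
    sudo rm -f /preloaded.tar.lz4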
	I0815 01:29:25.515881   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.015741   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:26.515122   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.014889   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:27.515108   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.015604   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:28.515658   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.015319   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:29.515225   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.015561   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
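The repeated pgrep lines above are a poll waiting for the kube-apiserver process to appear; the roughly half-second interval is read off the timestamps. The equivalent loop, as a sketch:

    # Wait until a kube-apiserver started by minikube is running; the logged probes fire every ~500ms.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done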
	I0815 01:29:27.225775   66492 main.go:141] libmachine: (no-preload-884893) Calling .Start
	I0815 01:29:27.225974   66492 main.go:141] libmachine: (no-preload-884893) Ensuring networks are active...
	I0815 01:29:27.226702   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network default is active
	I0815 01:29:27.227078   66492 main.go:141] libmachine: (no-preload-884893) Ensuring network mk-no-preload-884893 is active
	I0815 01:29:27.227577   66492 main.go:141] libmachine: (no-preload-884893) Getting domain xml...
	I0815 01:29:27.228376   66492 main.go:141] libmachine: (no-preload-884893) Creating domain...
	I0815 01:29:28.609215   66492 main.go:141] libmachine: (no-preload-884893) Waiting to get IP...
	I0815 01:29:28.610043   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.610440   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.610487   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.610415   68431 retry.go:31] will retry after 305.851347ms: waiting for machine to come up
	I0815 01:29:28.918245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:28.918747   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:28.918770   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:28.918720   68431 retry.go:31] will retry after 368.045549ms: waiting for machine to come up
	I0815 01:29:29.288313   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.289013   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.289046   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.288958   68431 retry.go:31] will retry after 415.68441ms: waiting for machine to come up
	I0815 01:29:29.706767   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:29.707226   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:29.707249   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:29.707180   68431 retry.go:31] will retry after 575.538038ms: waiting for machine to come up
	I0815 01:29:26.795064   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.795085   67000 pod_ready.go:81] duration metric: took 6.006168181s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.795096   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799159   67000 pod_ready.go:92] pod "kube-proxy-bmddn" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:26.799176   67000 pod_ready.go:81] duration metric: took 4.074526ms for pod "kube-proxy-bmddn" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:26.799184   67000 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:28.805591   67000 pod_ready.go:102] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:30.306235   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:30.306262   67000 pod_ready.go:81] duration metric: took 3.507070811s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:30.306273   67000 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:32.131219   67451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.114619197s)
	I0815 01:29:32.131242   67451 crio.go:469] duration metric: took 2.114723577s to extract the tarball
	I0815 01:29:32.131249   67451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 01:29:32.169830   67451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:32.217116   67451 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 01:29:32.217139   67451 cache_images.go:84] Images are preloaded, skipping loading
	I0815 01:29:32.217146   67451 kubeadm.go:934] updating node { 192.168.39.223 8444 v1.31.0 crio true true} ...
	I0815 01:29:32.217245   67451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-018537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:29:32.217305   67451 ssh_runner.go:195] Run: crio config
	I0815 01:29:32.272237   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:32.272257   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:32.272270   67451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:29:32.272292   67451 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-018537 NodeName:default-k8s-diff-port-018537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:29:32.272435   67451 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-018537"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:29:32.272486   67451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:29:32.282454   67451 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:29:32.282510   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:29:32.291448   67451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0815 01:29:32.307026   67451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:29:32.324183   67451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
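The rendered kubeadm/kubelet/kube-proxy configuration shown above is written to /var/tmp/minikube/kubeadm.yaml.new here and promoted to kubeadm.yaml further down in the log. As an aside that the test itself does not perform, kubeadm can exercise such a file without modifying the node:

    # Dry-run the generated config to validate it; nothing is written to the host.
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml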
	I0815 01:29:32.339298   67451 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I0815 01:29:32.342644   67451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:29:32.353518   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:32.468014   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:32.484049   67451 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537 for IP: 192.168.39.223
	I0815 01:29:32.484075   67451 certs.go:194] generating shared ca certs ...
	I0815 01:29:32.484097   67451 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:32.484263   67451 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:29:32.484313   67451 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:29:32.484326   67451 certs.go:256] generating profile certs ...
	I0815 01:29:32.484436   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/client.key
	I0815 01:29:32.484511   67451 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key.141a85fa
	I0815 01:29:32.484564   67451 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key
	I0815 01:29:32.484747   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:29:32.484787   67451 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:29:32.484797   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:29:32.484828   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:29:32.484869   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:29:32.484896   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:29:32.484953   67451 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:32.485741   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:29:32.521657   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:29:32.556226   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:29:32.585724   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:29:32.619588   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 01:29:32.649821   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:29:32.677343   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:29:32.699622   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/default-k8s-diff-port-018537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 01:29:32.721142   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:29:32.742388   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:29:32.766476   67451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:29:32.788341   67451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:29:32.803728   67451 ssh_runner.go:195] Run: openssl version
	I0815 01:29:32.809178   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:29:32.819091   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823068   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.823119   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:29:32.828361   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:29:32.837721   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:29:32.847217   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851176   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.851220   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:29:32.856303   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:29:32.865672   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:29:32.875695   67451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879910   67451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.879961   67451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:29:32.885240   67451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
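The three blocks above install CA material where OpenSSL can find it: each certificate is placed under /usr/share/ca-certificates, its subject-name hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so hash-based lookup resolves to it. A minimal sketch of that pattern for one certificate (paths from the log):

    # Compute the OpenSSL subject-name hash and install the <hash>.0 symlink that
    # hash-based CA lookup expects, mirroring the logged openssl/ln steps.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"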
	I0815 01:29:32.894951   67451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:29:32.899131   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:29:32.904465   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:29:32.910243   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:29:32.915874   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:29:32.921193   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:29:32.926569   67451 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
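The -checkend 86400 probes above ask whether each certificate is still valid 24 hours from now; openssl exits 0 if it is and 1 if it will have expired within that window, so the result can drive a regeneration decision, e.g.:

    # Exit 0: still valid in 24h; exit 1: expires within 24h (regeneration would then be needed).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      || echo "apiserver-kubelet-client.crt expires within 24h"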
	I0815 01:29:32.931905   67451 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-018537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-018537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:29:32.932015   67451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:29:32.932095   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:32.967184   67451 cri.go:89] found id: ""
	I0815 01:29:32.967270   67451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:29:32.977083   67451 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:29:32.977105   67451 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:29:32.977146   67451 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:29:32.986934   67451 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:29:32.988393   67451 kubeconfig.go:125] found "default-k8s-diff-port-018537" server: "https://192.168.39.223:8444"
	I0815 01:29:32.991478   67451 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:29:33.000175   67451 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.223
	I0815 01:29:33.000201   67451 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:29:33.000211   67451 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:29:33.000260   67451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:29:33.042092   67451 cri.go:89] found id: ""
	I0815 01:29:33.042173   67451 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:29:33.058312   67451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:29:33.067931   67451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:29:33.067951   67451 kubeadm.go:157] found existing configuration files:
	
	I0815 01:29:33.068005   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 01:29:33.076467   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:29:33.076532   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:29:33.085318   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 01:29:33.093657   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:29:33.093710   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:29:33.102263   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.110120   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:29:33.110166   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:29:33.118497   67451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 01:29:33.126969   67451 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:29:33.127017   67451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
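The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it. A compact sketch of that check-and-remove loop, with the endpoint and file names taken from the log:

    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Remove the file if it is absent or does not point at the expected endpoint.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done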
	I0815 01:29:33.135332   67451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:29:33.143869   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:33.257728   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.000703   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.223362   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.296248   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:34.400251   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:29:34.400365   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.901010   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.515518   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.015099   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:31.514899   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.015422   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:32.515483   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.015471   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:33.515843   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.015059   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:34.514953   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.015692   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:30.283919   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:30.284357   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:30.284387   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:30.284314   68431 retry.go:31] will retry after 737.00152ms: waiting for machine to come up
	I0815 01:29:31.023083   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.023593   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.023620   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.023541   68431 retry.go:31] will retry after 851.229647ms: waiting for machine to come up
	I0815 01:29:31.876610   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:31.877022   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:31.877051   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:31.876972   68431 retry.go:31] will retry after 914.072719ms: waiting for machine to come up
	I0815 01:29:32.792245   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:32.792723   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:32.792749   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:32.792674   68431 retry.go:31] will retry after 1.383936582s: waiting for machine to come up
	I0815 01:29:34.178425   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:34.178889   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:34.178928   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:34.178825   68431 retry.go:31] will retry after 1.574004296s: waiting for machine to come up
	I0815 01:29:32.314820   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:34.812868   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:35.400782   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.900844   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.400575   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.900769   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.916400   67451 api_server.go:72] duration metric: took 2.516148893s to wait for apiserver process to appear ...
	I0815 01:29:36.916432   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:29:36.916458   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.650207   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.650234   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.650246   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.704636   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:29:39.704687   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:29:39.917074   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:39.921711   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:39.921742   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:35.514869   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.015361   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:36.515461   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.015560   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:37.514995   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.015431   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:38.515382   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.014971   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:39.515702   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:40.015185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:35.754518   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:35.755025   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:35.755049   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:35.754951   68431 retry.go:31] will retry after 1.763026338s: waiting for machine to come up
	I0815 01:29:37.519406   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:37.519910   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:37.519940   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:37.519857   68431 retry.go:31] will retry after 1.953484546s: waiting for machine to come up
	I0815 01:29:39.475118   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:39.475481   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:39.475617   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:39.475446   68431 retry.go:31] will retry after 3.524055081s: waiting for machine to come up
	I0815 01:29:36.813811   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:39.312364   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:40.417362   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.421758   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.421793   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:40.917290   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:40.929914   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:29:40.929979   67451 api_server.go:103] status: https://192.168.39.223:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:29:41.417095   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:29:41.422436   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:29:41.430162   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:29:41.430190   67451 api_server.go:131] duration metric: took 4.513750685s to wait for apiserver health ...
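The sequence above is the usual health progression during a control-plane restart: the first anonymous probes are rejected with 403, later ones return the verbose check list with the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks still pending (500), and the endpoint finally answers 200. The same endpoint can be probed by hand; -k is needed because the apiserver presents the cluster's self-signed certificate:

    # Probe the health endpoint polled above (IP and port from the log); ?verbose lists individual checks.
    curl -k "https://192.168.39.223:8444/healthz?verbose"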
	I0815 01:29:41.430201   67451 cni.go:84] Creating CNI manager for ""
	I0815 01:29:41.430210   67451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:29:41.432041   67451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:29:41.433158   67451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:29:41.465502   67451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:29:41.488013   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:29:41.500034   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:29:41.500063   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:29:41.500071   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:29:41.500087   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:29:41.500098   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:29:41.500102   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:29:41.500107   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:29:41.500117   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:29:41.500120   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:29:41.500126   67451 system_pods.go:74] duration metric: took 12.091408ms to wait for pod list to return data ...
	I0815 01:29:41.500137   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:29:41.505113   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:29:41.505137   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:29:41.505154   67451 node_conditions.go:105] duration metric: took 5.005028ms to run NodePressure ...
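The NodePressure step reads the capacity fields reported above (17734596Ki ephemeral storage, 2 CPUs) from the node object; the same values can be pulled with kubectl, for example:

    # Print each node's capacity map (cpu, memory, ephemeral-storage, pods).
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'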
	I0815 01:29:41.505170   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:29:41.761818   67451 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767941   67451 kubeadm.go:739] kubelet initialised
	I0815 01:29:41.767972   67451 kubeadm.go:740] duration metric: took 6.119306ms waiting for restarted kubelet to initialise ...
	I0815 01:29:41.767980   67451 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:41.774714   67451 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.782833   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782861   67451 pod_ready.go:81] duration metric: took 8.124705ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.782870   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.782877   67451 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.790225   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790248   67451 pod_ready.go:81] duration metric: took 7.36386ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.790259   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.790265   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.797569   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797592   67451 pod_ready.go:81] duration metric: took 7.320672ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.797605   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.797611   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:41.891391   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891423   67451 pod_ready.go:81] duration metric: took 93.801865ms for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:41.891435   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:41.891442   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.291752   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291780   67451 pod_ready.go:81] duration metric: took 400.332851ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.291789   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-proxy-s8mfb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.291795   67451 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:42.691923   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691958   67451 pod_ready.go:81] duration metric: took 400.15227ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:42.691970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:42.691977   67451 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:43.091932   67451 pod_ready.go:97] node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091958   67451 pod_ready.go:81] duration metric: took 399.974795ms for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:29:43.091970   67451 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-018537" hosting pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:43.091976   67451 pod_ready.go:38] duration metric: took 1.323989077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
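The pod_ready checks above poll each labelled control-plane pod until it reports Ready. A rough manual equivalent, assuming the kubeconfig context carries the profile name default-k8s-diff-port-018537 and reusing the label selectors listed in the log (illustrative only, not part of the test run):

    # wait on the same system-critical pods the restart path polls above
    kubectl --context default-k8s-diff-port-018537 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
    kubectl --context default-k8s-diff-port-018537 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=240s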
	I0815 01:29:43.091990   67451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:29:43.103131   67451 ops.go:34] apiserver oom_adj: -16
	I0815 01:29:43.103155   67451 kubeadm.go:597] duration metric: took 10.126043167s to restartPrimaryControlPlane
	I0815 01:29:43.103165   67451 kubeadm.go:394] duration metric: took 10.171275892s to StartCluster
	I0815 01:29:43.103183   67451 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.103269   67451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:29:43.105655   67451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:29:43.105963   67451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:29:43.106027   67451 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:29:43.106123   67451 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106142   67451 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106162   67451 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106178   67451 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:29:43.106187   67451 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-018537"
	I0815 01:29:43.106200   67451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-018537"
	I0815 01:29:43.106226   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106255   67451 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.106274   67451 addons.go:243] addon metrics-server should already be in state true
	I0815 01:29:43.106203   67451 config.go:182] Loaded profile config "default-k8s-diff-port-018537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:43.106363   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.106702   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106731   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106708   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106789   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.106822   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.106963   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.107834   67451 out.go:177] * Verifying Kubernetes components...
	I0815 01:29:43.109186   67451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:43.127122   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0815 01:29:43.127378   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0815 01:29:43.127380   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0815 01:29:43.127678   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.127791   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128078   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.128296   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128323   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128466   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.128480   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.128671   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.128844   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.129231   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.129263   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.129768   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.129817   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.130089   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.130125   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.130219   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.130448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.134347   67451 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-018537"
	W0815 01:29:43.134366   67451 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:29:43.134394   67451 host.go:66] Checking if "default-k8s-diff-port-018537" exists ...
	I0815 01:29:43.134764   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.134801   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.148352   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0815 01:29:43.148713   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0815 01:29:43.148786   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149196   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.149378   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149420   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149838   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.149863   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.149891   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150092   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.150344   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.150698   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.152063   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.152848   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.154165   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0815 01:29:43.154664   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.155020   67451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:43.155087   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.155110   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.155596   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.156124   67451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:29:43.156166   67451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:29:43.156340   67451 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.156366   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:29:43.156389   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.157988   67451 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:29:43.159283   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:29:43.159299   67451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:29:43.159319   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.159668   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160304   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.160373   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.160866   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.161069   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.161234   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.161395   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.162257   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162673   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.162702   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.162838   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.163007   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.163179   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.163296   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.175175   67451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0815 01:29:43.175674   67451 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:29:43.176169   67451 main.go:141] libmachine: Using API Version  1
	I0815 01:29:43.176193   67451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:29:43.176566   67451 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:29:43.176824   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetState
	I0815 01:29:43.178342   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .DriverName
	I0815 01:29:43.178584   67451 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.178597   67451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:29:43.178615   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHHostname
	I0815 01:29:43.181058   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181448   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:53:52", ip: ""} in network mk-default-k8s-diff-port-018537: {Iface:virbr2 ExpiryTime:2024-08-15 02:29:16 +0000 UTC Type:0 Mac:52:54:00:ec:53:52 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:default-k8s-diff-port-018537 Clientid:01:52:54:00:ec:53:52}
	I0815 01:29:43.181482   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | domain default-k8s-diff-port-018537 has defined IP address 192.168.39.223 and MAC address 52:54:00:ec:53:52 in network mk-default-k8s-diff-port-018537
	I0815 01:29:43.181577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHPort
	I0815 01:29:43.181709   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHKeyPath
	I0815 01:29:43.181791   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .GetSSHUsername
	I0815 01:29:43.181873   67451 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/default-k8s-diff-port-018537/id_rsa Username:docker}
	I0815 01:29:43.318078   67451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:29:43.341037   67451 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:43.400964   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:29:43.400993   67451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:29:43.423693   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:29:43.423716   67451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:29:43.430460   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:29:43.453562   67451 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:43.453587   67451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:29:43.457038   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:29:43.495707   67451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:29:44.708047   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.25097545s)
	I0815 01:29:44.708106   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708111   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.212373458s)
	I0815 01:29:44.708119   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708129   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708141   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708135   67451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.277646183s)
	I0815 01:29:44.708182   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708201   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708391   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708409   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708419   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708428   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708531   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708562   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708568   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708577   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.708586   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708587   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.708599   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708605   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.708613   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708648   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.708614   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.708678   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710192   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710210   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.710220   67451 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-018537"
	I0815 01:29:44.710196   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) DBG | Closing plugin on server side
	I0815 01:29:44.710447   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.710467   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.716452   67451 main.go:141] libmachine: Making call to close driver server
	I0815 01:29:44.716468   67451 main.go:141] libmachine: (default-k8s-diff-port-018537) Calling .Close
	I0815 01:29:44.716716   67451 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:29:44.716737   67451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:29:44.718650   67451 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0815 01:29:44.719796   67451 addons.go:510] duration metric: took 1.613772622s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
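Once the storage-provisioner, default-storageclass and metrics-server manifests applied above are in place, the addon state can be confirmed with plain kubectl; a minimal sketch, assuming the Deployment keeps its default metrics-server name in kube-system (the pod name metrics-server-6867b74b74-gdpxh in the log suggests it does):

    # confirm the metrics-server Deployment rolled out, then exercise the metrics API it serves
    kubectl --context default-k8s-diff-port-018537 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context default-k8s-diff-port-018537 top nodes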
	I0815 01:29:40.514981   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.015724   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:41.515316   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.014923   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:42.515738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.015884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.515747   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.015794   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:44.515306   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:45.015384   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:43.000581   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:43.001092   66492 main.go:141] libmachine: (no-preload-884893) DBG | unable to find current IP address of domain no-preload-884893 in network mk-no-preload-884893
	I0815 01:29:43.001116   66492 main.go:141] libmachine: (no-preload-884893) DBG | I0815 01:29:43.001045   68431 retry.go:31] will retry after 4.175502286s: waiting for machine to come up
	I0815 01:29:41.313801   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:43.814135   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:47.178102   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.178637   66492 main.go:141] libmachine: (no-preload-884893) Found IP for machine: 192.168.61.166
	I0815 01:29:47.178665   66492 main.go:141] libmachine: (no-preload-884893) Reserving static IP address...
	I0815 01:29:47.178678   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has current primary IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.179108   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.179151   66492 main.go:141] libmachine: (no-preload-884893) DBG | skip adding static IP to network mk-no-preload-884893 - found existing host DHCP lease matching {name: "no-preload-884893", mac: "52:54:00:b7:93:c6", ip: "192.168.61.166"}
	I0815 01:29:47.179169   66492 main.go:141] libmachine: (no-preload-884893) Reserved static IP address: 192.168.61.166
	I0815 01:29:47.179188   66492 main.go:141] libmachine: (no-preload-884893) Waiting for SSH to be available...
	I0815 01:29:47.179204   66492 main.go:141] libmachine: (no-preload-884893) DBG | Getting to WaitForSSH function...
	I0815 01:29:47.181522   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.181909   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.181937   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.182038   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH client type: external
	I0815 01:29:47.182070   66492 main.go:141] libmachine: (no-preload-884893) DBG | Using SSH private key: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa (-rw-------)
	I0815 01:29:47.182105   66492 main.go:141] libmachine: (no-preload-884893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 01:29:47.182126   66492 main.go:141] libmachine: (no-preload-884893) DBG | About to run SSH command:
	I0815 01:29:47.182156   66492 main.go:141] libmachine: (no-preload-884893) DBG | exit 0
	I0815 01:29:47.309068   66492 main.go:141] libmachine: (no-preload-884893) DBG | SSH cmd err, output: <nil>: 
	I0815 01:29:47.309492   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetConfigRaw
	I0815 01:29:47.310181   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.312956   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313296   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.313327   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.313503   66492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/config.json ...
	I0815 01:29:47.313720   66492 machine.go:94] provisionDockerMachine start ...
	I0815 01:29:47.313742   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:47.313965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.315987   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316252   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.316278   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.316399   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.316555   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316741   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.316886   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.317071   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.317250   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.317263   66492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 01:29:47.424862   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 01:29:47.424894   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425125   66492 buildroot.go:166] provisioning hostname "no-preload-884893"
	I0815 01:29:47.425156   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.425353   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.428397   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.428802   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.428825   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.429003   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.429185   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429336   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.429464   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.429650   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.429863   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.429881   66492 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-884893 && echo "no-preload-884893" | sudo tee /etc/hostname
	I0815 01:29:47.552134   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-884893
	
	I0815 01:29:47.552159   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.554997   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555458   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.555500   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.555742   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.555975   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556148   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.556320   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.556525   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.556707   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.556733   66492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-884893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-884893/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-884893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 01:29:47.673572   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
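A quick way to confirm what the hostname script above wrote, run on the no-preload-884893 guest (for example via minikube ssh -p no-preload-884893; illustrative, not part of the log):

    # hostname and 127.0.1.1 /etc/hosts entry written by the commands above
    hostname
    grep no-preload-884893 /etc/hosts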
	I0815 01:29:47.673608   66492 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19443-13088/.minikube CaCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19443-13088/.minikube}
	I0815 01:29:47.673637   66492 buildroot.go:174] setting up certificates
	I0815 01:29:47.673653   66492 provision.go:84] configureAuth start
	I0815 01:29:47.673670   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetMachineName
	I0815 01:29:47.674016   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:47.677054   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.677526   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.677588   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.680115   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680510   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.680539   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.680719   66492 provision.go:143] copyHostCerts
	I0815 01:29:47.680772   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem, removing ...
	I0815 01:29:47.680789   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem
	I0815 01:29:47.680846   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/ca.pem (1078 bytes)
	I0815 01:29:47.680962   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem, removing ...
	I0815 01:29:47.680970   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem
	I0815 01:29:47.680992   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/cert.pem (1123 bytes)
	I0815 01:29:47.681057   66492 exec_runner.go:144] found /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem, removing ...
	I0815 01:29:47.681064   66492 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem
	I0815 01:29:47.681081   66492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19443-13088/.minikube/key.pem (1679 bytes)
	I0815 01:29:47.681129   66492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem org=jenkins.no-preload-884893 san=[127.0.0.1 192.168.61.166 localhost minikube no-preload-884893]
	I0815 01:29:47.828342   66492 provision.go:177] copyRemoteCerts
	I0815 01:29:47.828395   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 01:29:47.828416   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.831163   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831546   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.831576   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.831760   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.831948   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.832109   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.832218   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:47.914745   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0815 01:29:47.938252   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 01:29:47.960492   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
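The server certificate copied to /etc/docker/server.pem above was generated with the SANs listed at provision.go:117; one way to inspect them on the guest, assuming openssl is available in the image (illustrative):

    # print the Subject Alternative Names of the provisioned server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'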
	I0815 01:29:47.982681   66492 provision.go:87] duration metric: took 309.010268ms to configureAuth
	I0815 01:29:47.982714   66492 buildroot.go:189] setting minikube options for container-runtime
	I0815 01:29:47.982971   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:29:47.983095   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:47.985798   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986181   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:47.986213   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:47.986383   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:47.986584   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986748   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:47.986935   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:47.987115   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:47.987328   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:47.987346   66492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 01:29:48.264004   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 01:29:48.264027   66492 machine.go:97] duration metric: took 950.293757ms to provisionDockerMachine
	I0815 01:29:48.264037   66492 start.go:293] postStartSetup for "no-preload-884893" (driver="kvm2")
	I0815 01:29:48.264047   66492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 01:29:48.264060   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.264375   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 01:29:48.264401   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.267376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.267859   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.267888   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.268115   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.268334   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.268521   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.268713   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.351688   66492 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 01:29:48.356871   66492 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 01:29:48.356897   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/addons for local assets ...
	I0815 01:29:48.356977   66492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19443-13088/.minikube/files for local assets ...
	I0815 01:29:48.357078   66492 filesync.go:149] local asset: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem -> 202792.pem in /etc/ssl/certs
	I0815 01:29:48.357194   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 01:29:48.369590   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:29:48.397339   66492 start.go:296] duration metric: took 133.287217ms for postStartSetup
	I0815 01:29:48.397389   66492 fix.go:56] duration metric: took 21.196078137s for fixHost
	I0815 01:29:48.397434   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.400353   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.400792   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.400831   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.401118   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.401352   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401509   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.401707   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.401914   66492 main.go:141] libmachine: Using SSH client type: native
	I0815 01:29:48.402132   66492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0815 01:29:48.402148   66492 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0815 01:29:48.518704   66492 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723685388.495787154
	
	I0815 01:29:48.518731   66492 fix.go:216] guest clock: 1723685388.495787154
	I0815 01:29:48.518743   66492 fix.go:229] Guest: 2024-08-15 01:29:48.495787154 +0000 UTC Remote: 2024-08-15 01:29:48.397394567 +0000 UTC m=+358.213942436 (delta=98.392587ms)
	I0815 01:29:48.518771   66492 fix.go:200] guest clock delta is within tolerance: 98.392587ms
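The 98.392587ms delta reported above is simply the guest clock minus the host clock: 1723685388.495787154 - 1723685388.397394567 = 0.098392587 s, comfortably inside the tolerance that fix.go checks.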
	I0815 01:29:48.518779   66492 start.go:83] releasing machines lock for "no-preload-884893", held for 21.317569669s
	I0815 01:29:48.518808   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.519146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:48.522001   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522428   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.522461   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.522626   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523145   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523490   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:29:48.523580   66492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 01:29:48.523634   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.523747   66492 ssh_runner.go:195] Run: cat /version.json
	I0815 01:29:48.523768   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:29:48.527031   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527128   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527408   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527473   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527563   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:48.527592   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:48.527709   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527781   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:29:48.527943   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528173   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528177   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:29:48.528305   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.528417   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:29:48.528598   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:29:48.610614   66492 ssh_runner.go:195] Run: systemctl --version
	I0815 01:29:48.647464   66492 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 01:29:48.786666   66492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 01:29:48.792525   66492 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 01:29:48.792593   66492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 01:29:48.807904   66492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 01:29:48.807924   66492 start.go:495] detecting cgroup driver to use...
	I0815 01:29:48.807975   66492 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 01:29:48.826113   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 01:29:48.839376   66492 docker.go:217] disabling cri-docker service (if available) ...
	I0815 01:29:48.839443   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 01:29:48.852840   66492 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 01:29:48.866029   66492 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 01:29:48.974628   66492 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 01:29:49.141375   66492 docker.go:233] disabling docker service ...
	I0815 01:29:49.141447   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 01:29:49.155650   66492 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 01:29:49.168527   66492 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 01:29:49.295756   66492 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 01:29:49.430096   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 01:29:49.443508   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 01:29:49.460504   66492 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 01:29:49.460567   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.470309   66492 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 01:29:49.470376   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.480340   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.490326   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.500831   66492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 01:29:49.511629   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.522350   66492 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 01:29:49.541871   66492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
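The run of sed commands from 01:29:49.460 to 01:29:49.541 edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, keep conmon in the pod cgroup, and allow unprivileged binds to low ports. A sketch of the fragment those edits leave behind (the section headers are the stock CRI-O layout and are an assumption, since the log only shows the key edits) looks like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]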
	I0815 01:29:49.553334   66492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 01:29:49.562756   66492 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 01:29:49.562817   66492 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 01:29:49.575907   66492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
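The sysctl failure at 01:29:49.562 is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why minikube immediately loads the module and then enables IPv4 forwarding before restarting CRI-O. Verifying that end state by hand (ordinary commands, not from this log) would be:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward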
	I0815 01:29:49.586017   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:29:49.709089   66492 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 01:29:49.848506   66492 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 01:29:49.848599   66492 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 01:29:49.853379   66492 start.go:563] Will wait 60s for crictl version
	I0815 01:29:49.853442   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:49.857695   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 01:29:49.897829   66492 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 01:29:49.897909   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.927253   66492 ssh_runner.go:195] Run: crio --version
	I0815 01:29:49.956689   66492 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 01:29:45.345209   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:47.844877   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:49.845546   67451 node_ready.go:53] node "default-k8s-diff-port-018537" has status "Ready":"False"
	I0815 01:29:45.515828   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.015564   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:46.515829   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.014916   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:47.515308   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.014871   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:48.515182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.015946   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.514892   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.015788   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:49.957823   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetIP
	I0815 01:29:49.960376   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960741   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:29:49.960771   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:29:49.960975   66492 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 01:29:49.964703   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
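The /bin/bash one-liner above is how minikube keeps /etc/hosts current: it filters out any stale host.minikube.internal entry, appends the gateway address found for this network, and copies the temp file back over /etc/hosts. After it runs, the guest resolves the host via a line like:

	192.168.61.1	host.minikube.internal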
	I0815 01:29:49.975918   66492 kubeadm.go:883] updating cluster {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 01:29:49.976078   66492 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 01:29:49.976130   66492 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 01:29:50.007973   66492 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 01:29:50.007997   66492 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 01:29:50.008034   66492 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.008076   66492 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.008092   66492 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.008147   66492 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 01:29:50.008167   66492 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.008238   66492 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.008261   66492 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.008535   66492 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009666   66492 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.009745   66492 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:50.009748   66492 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.009734   66492 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.009768   66492 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.009775   66492 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.009801   66492 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
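The image.go:178 "No such image" errors are not failures of the run: for the no-preload profile minikube first asks the local Docker daemon for each control-plane image, and when every lookup misses it falls back to the tarballs under .minikube/cache/images, transferring and loading them into CRI-O with podman (the cache_images.go and "podman load" lines that follow). Repeating one of those loads by hand on the guest would look roughly like:

	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	sudo podman images | grep kube-scheduler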
	I0815 01:29:46.312368   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:48.312568   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.313249   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.347683   67451 node_ready.go:49] node "default-k8s-diff-port-018537" has status "Ready":"True"
	I0815 01:29:50.347704   67451 node_ready.go:38] duration metric: took 7.006638337s for node "default-k8s-diff-port-018537" to be "Ready" ...
	I0815 01:29:50.347713   67451 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:29:50.358505   67451 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364110   67451 pod_ready.go:92] pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.364139   67451 pod_ready.go:81] duration metric: took 5.600464ms for pod "coredns-6f6b679f8f-gxdqt" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.364150   67451 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370186   67451 pod_ready.go:92] pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.370212   67451 pod_ready.go:81] duration metric: took 6.054189ms for pod "etcd-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.370223   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380051   67451 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:50.380089   67451 pod_ready.go:81] duration metric: took 9.848463ms for pod "kube-apiserver-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:50.380107   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.385988   67451 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.386015   67451 pod_ready.go:81] duration metric: took 2.005899675s for pod "kube-controller-manager-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.386027   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390635   67451 pod_ready.go:92] pod "kube-proxy-s8mfb" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.390654   67451 pod_ready.go:81] duration metric: took 4.620554ms for pod "kube-proxy-s8mfb" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.390663   67451 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745424   67451 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace has status "Ready":"True"
	I0815 01:29:52.745447   67451 pod_ready.go:81] duration metric: took 354.777631ms for pod "kube-scheduler-default-k8s-diff-port-018537" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:52.745458   67451 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	I0815 01:29:54.752243   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:50.515037   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.015346   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:51.514948   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.015826   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:52.514876   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.015522   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:53.515665   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.015480   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:54.515202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:55.014921   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:50.224358   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.237723   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 01:29:50.240904   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.273259   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.275978   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.277287   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.293030   66492 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 01:29:50.293078   66492 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.293135   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.293169   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425265   66492 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 01:29:50.425285   66492 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 01:29:50.425307   66492 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.425319   66492 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.425326   66492 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.425367   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425374   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425375   66492 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 01:29:50.425390   66492 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.425415   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425409   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.425427   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.425436   66492 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 01:29:50.425451   66492 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.425471   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:50.438767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.438827   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.477250   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.477290   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.477347   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.477399   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.507338   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.527412   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.618767   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.623557   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 01:29:50.623650   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.623741   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.623773   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 01:29:50.668092   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 01:29:50.738811   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 01:29:50.747865   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 01:29:50.747932   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 01:29:50.747953   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 01:29:50.747983   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.748016   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:50.748026   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 01:29:50.777047   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 01:29:50.777152   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:50.811559   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 01:29:50.811678   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:50.829106   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 01:29:50.829115   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 01:29:50.829131   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829161   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 01:29:50.829178   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 01:29:50.829206   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:29:50.829276   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 01:29:50.829287   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 01:29:50.829319   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 01:29:50.829360   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:50.833595   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 01:29:50.869008   66492 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899406   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.070205124s)
	I0815 01:29:52.899446   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 01:29:52.899444   66492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.070218931s)
	I0815 01:29:52.899466   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899475   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 01:29:52.899477   66492 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.03044186s)
	I0815 01:29:52.899510   66492 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 01:29:52.899516   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 01:29:52.899534   66492 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.899573   66492 ssh_runner.go:195] Run: which crictl
	I0815 01:29:54.750498   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.850957835s)
	I0815 01:29:54.750533   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 01:29:54.750530   66492 ssh_runner.go:235] Completed: which crictl: (1.850936309s)
	I0815 01:29:54.750567   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.750593   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:54.750609   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 01:29:54.787342   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:52.314561   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:54.813265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:56.752530   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:58.752625   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:55.515921   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:55.516020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:55.556467   66919 cri.go:89] found id: ""
	I0815 01:29:55.556495   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.556506   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:55.556514   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:55.556584   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:55.591203   66919 cri.go:89] found id: ""
	I0815 01:29:55.591227   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.591234   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:55.591240   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:55.591319   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:55.628819   66919 cri.go:89] found id: ""
	I0815 01:29:55.628847   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.628858   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:55.628865   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:55.628934   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:55.673750   66919 cri.go:89] found id: ""
	I0815 01:29:55.673779   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.673790   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:55.673798   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:55.673857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:55.717121   66919 cri.go:89] found id: ""
	I0815 01:29:55.717153   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.717164   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:55.717171   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:55.717233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:55.753387   66919 cri.go:89] found id: ""
	I0815 01:29:55.753415   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.753425   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:55.753434   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:55.753507   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:55.787148   66919 cri.go:89] found id: ""
	I0815 01:29:55.787183   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.787194   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:55.787207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:55.787272   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:55.820172   66919 cri.go:89] found id: ""
	I0815 01:29:55.820212   66919 logs.go:276] 0 containers: []
	W0815 01:29:55.820226   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:55.820238   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:55.820260   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:55.869089   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:55.869120   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:55.882614   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:55.882644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:56.004286   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:56.004364   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:56.004382   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:56.077836   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:56.077873   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
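Blocks like the one above recur throughout this stretch of the log for process 66919: pgrep finds no kube-apiserver, every crictl listing comes back empty, and "kubectl describe nodes" is refused on localhost:8443 because nothing is serving there. The v1.20.0 binary paths suggest this is the old-k8s-version profile whose control plane has not come up yet; when triaging such a report, the same checks can be repeated by hand on the node:

	sudo crictl ps -a --name kube-apiserver
	sudo journalctl -u kubelet -n 100 --no-pager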
	I0815 01:29:58.628976   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:29:58.642997   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:29:58.643074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:29:58.675870   66919 cri.go:89] found id: ""
	I0815 01:29:58.675906   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.675916   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:29:58.675921   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:29:58.675971   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:29:58.708231   66919 cri.go:89] found id: ""
	I0815 01:29:58.708263   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.708271   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:29:58.708277   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:29:58.708347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:29:58.744121   66919 cri.go:89] found id: ""
	I0815 01:29:58.744151   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.744162   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:29:58.744169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:29:58.744231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:29:58.783191   66919 cri.go:89] found id: ""
	I0815 01:29:58.783225   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.783238   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:29:58.783246   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:29:58.783315   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:29:58.821747   66919 cri.go:89] found id: ""
	I0815 01:29:58.821775   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.821785   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:29:58.821801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:29:58.821865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:29:58.859419   66919 cri.go:89] found id: ""
	I0815 01:29:58.859450   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.859458   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:29:58.859463   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:29:58.859520   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:29:58.900959   66919 cri.go:89] found id: ""
	I0815 01:29:58.900988   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.900999   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:29:58.901006   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:29:58.901069   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:29:58.940714   66919 cri.go:89] found id: ""
	I0815 01:29:58.940746   66919 logs.go:276] 0 containers: []
	W0815 01:29:58.940758   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:29:58.940779   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:29:58.940796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:29:58.956973   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:29:58.957004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:29:59.024399   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:29:59.024426   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:29:59.024439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:29:59.106170   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:29:59.106210   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:29:59.142151   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:29:59.142181   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:29:56.948465   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.1978264s)
	I0815 01:29:56.948496   66492 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.161116111s)
	I0815 01:29:56.948602   66492 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:29:56.948503   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 01:29:56.948644   66492 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.948718   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 01:29:56.985210   66492 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 01:29:56.985331   66492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:29:58.731174   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.782427987s)
	I0815 01:29:58.731211   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 01:29:58.731234   66492 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731284   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 01:29:58.731184   66492 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.745828896s)
	I0815 01:29:58.731343   66492 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 01:29:57.313743   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:29:59.814068   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:00.752802   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:02.752939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:01.696371   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:01.709675   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:01.709748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:01.747907   66919 cri.go:89] found id: ""
	I0815 01:30:01.747934   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.747941   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:01.747949   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:01.748009   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:01.785404   66919 cri.go:89] found id: ""
	I0815 01:30:01.785429   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.785437   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:01.785442   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:01.785499   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:01.820032   66919 cri.go:89] found id: ""
	I0815 01:30:01.820060   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.820068   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:01.820073   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:01.820134   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:01.853219   66919 cri.go:89] found id: ""
	I0815 01:30:01.853257   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.853268   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:01.853276   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:01.853331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:01.895875   66919 cri.go:89] found id: ""
	I0815 01:30:01.895903   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.895915   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:01.895922   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:01.895983   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:01.929753   66919 cri.go:89] found id: ""
	I0815 01:30:01.929785   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.929796   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:01.929803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:01.929865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:01.961053   66919 cri.go:89] found id: ""
	I0815 01:30:01.961087   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.961099   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:01.961107   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:01.961174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:01.993217   66919 cri.go:89] found id: ""
	I0815 01:30:01.993247   66919 logs.go:276] 0 containers: []
	W0815 01:30:01.993258   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:01.993268   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:01.993287   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:02.051367   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:02.051400   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:02.065818   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:02.065851   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:02.150692   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:02.150721   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:02.150738   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:02.262369   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:02.262406   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:04.813873   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:04.829471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:04.829549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:04.871020   66919 cri.go:89] found id: ""
	I0815 01:30:04.871049   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.871058   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:04.871064   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:04.871131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:04.924432   66919 cri.go:89] found id: ""
	I0815 01:30:04.924462   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.924474   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:04.924480   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:04.924543   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:04.972947   66919 cri.go:89] found id: ""
	I0815 01:30:04.972979   66919 logs.go:276] 0 containers: []
	W0815 01:30:04.972991   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:04.972999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:04.973123   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:05.004748   66919 cri.go:89] found id: ""
	I0815 01:30:05.004772   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.004780   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:05.004785   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:05.004850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:05.036064   66919 cri.go:89] found id: ""
	I0815 01:30:05.036093   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.036103   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:05.036110   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:05.036174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:05.074397   66919 cri.go:89] found id: ""
	I0815 01:30:05.074430   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.074457   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:05.074467   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:05.074527   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:05.110796   66919 cri.go:89] found id: ""
	I0815 01:30:05.110821   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.110830   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:05.110836   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:05.110897   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:00.606670   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.875360613s)
	I0815 01:30:00.606701   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 01:30:00.606725   66492 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:00.606772   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 01:30:04.297747   66492 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.690945823s)
	I0815 01:30:04.297780   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 01:30:04.297811   66492 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:04.297881   66492 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 01:30:05.049009   66492 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19443-13088/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 01:30:05.049059   66492 cache_images.go:123] Successfully loaded all cached images
	I0815 01:30:05.049067   66492 cache_images.go:92] duration metric: took 15.041058069s to LoadCachedImages
	I0815 01:30:05.049083   66492 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.31.0 crio true true} ...
	I0815 01:30:05.049215   66492 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-884893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 01:30:05.049295   66492 ssh_runner.go:195] Run: crio config
	I0815 01:30:05.101896   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:05.101915   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:05.101925   66492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 01:30:05.101953   66492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-884893 NodeName:no-preload-884893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 01:30:05.102129   66492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-884893"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 01:30:05.102202   66492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 01:30:05.114396   66492 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 01:30:05.114464   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 01:30:05.124036   66492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 01:30:05.141411   66492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 01:30:05.156888   66492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0815 01:30:05.173630   66492 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0815 01:30:05.177421   66492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 01:30:05.188839   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:30:02.313495   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:04.812529   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.252826   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:07.254206   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.753065   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:05.148938   66919 cri.go:89] found id: ""
	I0815 01:30:05.148960   66919 logs.go:276] 0 containers: []
	W0815 01:30:05.148968   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:05.148976   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:05.148986   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:05.202523   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:05.202553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:05.215903   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:05.215935   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:05.294685   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:05.294709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:05.294724   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:05.397494   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:05.397529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:07.946734   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.967265   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:07.967341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:08.005761   66919 cri.go:89] found id: ""
	I0815 01:30:08.005792   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.005808   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:08.005814   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:08.005878   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:08.044124   66919 cri.go:89] found id: ""
	I0815 01:30:08.044154   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.044166   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:08.044173   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:08.044238   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:08.078729   66919 cri.go:89] found id: ""
	I0815 01:30:08.078757   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.078769   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:08.078777   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:08.078841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:08.121988   66919 cri.go:89] found id: ""
	I0815 01:30:08.122020   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.122035   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:08.122042   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:08.122108   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:08.156930   66919 cri.go:89] found id: ""
	I0815 01:30:08.156956   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.156964   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:08.156969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:08.157034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:08.201008   66919 cri.go:89] found id: ""
	I0815 01:30:08.201049   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.201060   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:08.201067   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:08.201128   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:08.241955   66919 cri.go:89] found id: ""
	I0815 01:30:08.241979   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.241987   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:08.241993   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:08.242041   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:08.277271   66919 cri.go:89] found id: ""
	I0815 01:30:08.277307   66919 logs.go:276] 0 containers: []
	W0815 01:30:08.277317   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:08.277328   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:08.277343   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:08.339037   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:08.339082   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:08.355588   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:08.355617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:08.436131   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:08.436157   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:08.436170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:08.541231   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:08.541267   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:05.307306   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:30:05.326586   66492 certs.go:68] Setting up /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893 for IP: 192.168.61.166
	I0815 01:30:05.326606   66492 certs.go:194] generating shared ca certs ...
	I0815 01:30:05.326620   66492 certs.go:226] acquiring lock for ca certs: {Name:mka993f83e51f4a6c691ce83d5a0e61f1c8a954d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:30:05.326754   66492 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key
	I0815 01:30:05.326798   66492 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key
	I0815 01:30:05.326807   66492 certs.go:256] generating profile certs ...
	I0815 01:30:05.326885   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.key
	I0815 01:30:05.326942   66492 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key.2b09f8c1
	I0815 01:30:05.326975   66492 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key
	I0815 01:30:05.327152   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem (1338 bytes)
	W0815 01:30:05.327216   66492 certs.go:480] ignoring /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279_empty.pem, impossibly tiny 0 bytes
	I0815 01:30:05.327231   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca-key.pem (1679 bytes)
	I0815 01:30:05.327260   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/ca.pem (1078 bytes)
	I0815 01:30:05.327292   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/cert.pem (1123 bytes)
	I0815 01:30:05.327315   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/certs/key.pem (1679 bytes)
	I0815 01:30:05.327353   66492 certs.go:484] found cert: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem (1708 bytes)
	I0815 01:30:05.328116   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 01:30:05.358988   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 01:30:05.386047   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 01:30:05.422046   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 01:30:05.459608   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 01:30:05.489226   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 01:30:05.518361   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 01:30:05.542755   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 01:30:05.567485   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/certs/20279.pem --> /usr/share/ca-certificates/20279.pem (1338 bytes)
	I0815 01:30:05.590089   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/ssl/certs/202792.pem --> /usr/share/ca-certificates/202792.pem (1708 bytes)
	I0815 01:30:05.614248   66492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19443-13088/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 01:30:05.636932   66492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 01:30:05.652645   66492 ssh_runner.go:195] Run: openssl version
	I0815 01:30:05.658261   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20279.pem && ln -fs /usr/share/ca-certificates/20279.pem /etc/ssl/certs/20279.pem"
	I0815 01:30:05.668530   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673009   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 00:17 /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.673091   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20279.pem
	I0815 01:30:05.678803   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20279.pem /etc/ssl/certs/51391683.0"
	I0815 01:30:05.689237   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202792.pem && ln -fs /usr/share/ca-certificates/202792.pem /etc/ssl/certs/202792.pem"
	I0815 01:30:05.699211   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703378   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 00:17 /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.703430   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202792.pem
	I0815 01:30:05.708890   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202792.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 01:30:05.718664   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 01:30:05.729058   66492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733298   66492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.733352   66492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 01:30:05.738793   66492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 01:30:05.749007   66492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 01:30:05.753780   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 01:30:05.759248   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 01:30:05.764978   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 01:30:05.770728   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 01:30:05.775949   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 01:30:05.781530   66492 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
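	(Editor's note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours. The same check can be expressed with the Go standard library; the certificate path below is one of the files from the log, and the rest is a sketch.)

	// cert_checkend.go: sketch of `openssl x509 -checkend 86400` in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400: fail if the cert expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}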
	I0815 01:30:05.786881   66492 kubeadm.go:392] StartCluster: {Name:no-preload-884893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-884893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 01:30:05.786997   66492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 01:30:05.787058   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.821591   66492 cri.go:89] found id: ""
	I0815 01:30:05.821662   66492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 01:30:05.832115   66492 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 01:30:05.832135   66492 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 01:30:05.832192   66492 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 01:30:05.841134   66492 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 01:30:05.842134   66492 kubeconfig.go:125] found "no-preload-884893" server: "https://192.168.61.166:8443"
	I0815 01:30:05.844248   66492 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 01:30:05.853112   66492 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.166
	I0815 01:30:05.853149   66492 kubeadm.go:1160] stopping kube-system containers ...
	I0815 01:30:05.853161   66492 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 01:30:05.853200   66492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 01:30:05.887518   66492 cri.go:89] found id: ""
	I0815 01:30:05.887591   66492 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 01:30:05.905394   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:30:05.914745   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:30:05.914763   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:30:05.914812   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:30:05.924190   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:30:05.924244   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:30:05.933573   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:30:05.942352   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:30:05.942419   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:30:05.951109   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.959593   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:30:05.959656   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:30:05.968126   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:30:05.976084   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:30:05.976145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:30:05.984770   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:30:05.993658   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.089280   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:06.949649   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.160787   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.231870   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:07.368542   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:30:07.368644   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:07.868980   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.369588   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:08.395734   66492 api_server.go:72] duration metric: took 1.027190846s to wait for apiserver process to appear ...
	I0815 01:30:08.395760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:30:08.395782   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:07.313709   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:09.812159   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.394556   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.394591   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.394610   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.433312   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.433352   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.433366   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.450472   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 01:30:11.450507   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 01:30:11.895986   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:11.900580   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:11.900612   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.396449   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.402073   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 01:30:12.402097   66492 api_server.go:103] status: https://192.168.61.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 01:30:12.896742   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:30:12.902095   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:30:12.909261   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:30:12.909292   66492 api_server.go:131] duration metric: took 4.513523262s to wait for apiserver health ...
	I0815 01:30:12.909304   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:30:12.909312   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:30:12.911002   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
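	(Editor's note: the sequence above is the apiserver healthz wait: minikube polls https://192.168.61.166:8443/healthz roughly every 500ms, treating 403 (anonymous access still forbidden early in startup) and 500 (post-start hooks such as rbac/bootstrap-roles not finished) as "not ready yet" until a 200 "ok" comes back. A simplified polling loop is sketched below; the URL comes from the log, the timings and TLS handling are assumptions.)

	// healthz_wait.go: simplified sketch of the apiserver healthz polling.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is not in the host trust store here, so this
			// sketch skips verification; minikube uses its own CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.166:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// 403 or 500 during startup: keep retrying.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy in time")
	}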
	I0815 01:30:12.252177   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:14.253401   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:11.090797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:11.105873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:11.105951   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:11.139481   66919 cri.go:89] found id: ""
	I0815 01:30:11.139509   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.139520   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:11.139528   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:11.139586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:11.176291   66919 cri.go:89] found id: ""
	I0815 01:30:11.176320   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.176329   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:11.176336   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:11.176408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:11.212715   66919 cri.go:89] found id: ""
	I0815 01:30:11.212750   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.212760   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:11.212766   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:11.212824   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:11.247283   66919 cri.go:89] found id: ""
	I0815 01:30:11.247311   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.247321   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:11.247328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:11.247391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:11.280285   66919 cri.go:89] found id: ""
	I0815 01:30:11.280319   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.280332   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:11.280339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:11.280407   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:11.317883   66919 cri.go:89] found id: ""
	I0815 01:30:11.317911   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.317930   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:11.317937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:11.317998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:11.355178   66919 cri.go:89] found id: ""
	I0815 01:30:11.355208   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.355220   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:11.355227   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:11.355287   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:11.390965   66919 cri.go:89] found id: ""
	I0815 01:30:11.390992   66919 logs.go:276] 0 containers: []
	W0815 01:30:11.391004   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:11.391015   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:11.391030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:11.445967   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:11.446004   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:11.460539   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:11.460570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:11.537022   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:11.537043   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:11.537058   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:11.625438   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:11.625476   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.175870   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:14.189507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:14.189576   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:14.225227   66919 cri.go:89] found id: ""
	I0815 01:30:14.225255   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.225264   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:14.225271   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:14.225350   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:14.260247   66919 cri.go:89] found id: ""
	I0815 01:30:14.260276   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.260286   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:14.260294   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:14.260364   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:14.295498   66919 cri.go:89] found id: ""
	I0815 01:30:14.295528   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.295538   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:14.295552   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:14.295617   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:14.334197   66919 cri.go:89] found id: ""
	I0815 01:30:14.334228   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.334239   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:14.334247   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:14.334308   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:14.376198   66919 cri.go:89] found id: ""
	I0815 01:30:14.376232   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.376244   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:14.376252   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:14.376313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:14.416711   66919 cri.go:89] found id: ""
	I0815 01:30:14.416744   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.416755   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:14.416763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:14.416823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:14.453890   66919 cri.go:89] found id: ""
	I0815 01:30:14.453917   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.453930   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:14.453952   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:14.454024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:14.497742   66919 cri.go:89] found id: ""
	I0815 01:30:14.497768   66919 logs.go:276] 0 containers: []
	W0815 01:30:14.497776   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:14.497787   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:14.497803   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:14.511938   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:14.511980   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:14.583464   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:14.583490   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:14.583510   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:14.683497   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:14.683540   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:14.724290   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:14.724327   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:12.912470   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:30:12.924194   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:30:12.943292   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:30:12.957782   66492 system_pods.go:59] 8 kube-system pods found
	I0815 01:30:12.957825   66492 system_pods.go:61] "coredns-6f6b679f8f-flg2c" [637e4479-8f63-481a-b3d8-c5c4a35ca60a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 01:30:12.957836   66492 system_pods.go:61] "etcd-no-preload-884893" [f786f812-e4b8-41d4-bf09-1350fee38efb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 01:30:12.957848   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [128cfe47-3a25-4d2c-8869-0d2aafa69852] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 01:30:12.957859   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [e1cce704-2092-4350-8b2d-a96b4cb90969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 01:30:12.957870   66492 system_pods.go:61] "kube-proxy-l559z" [67d270af-bcf3-4c4a-a917-84a3b4477a98] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 01:30:12.957889   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [004b37a2-58c2-431d-b43e-de894b7fa8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 01:30:12.957900   66492 system_pods.go:61] "metrics-server-6867b74b74-qnnqs" [397b72b1-60cb-41b6-88c4-cb0c3d9200da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:30:12.957909   66492 system_pods.go:61] "storage-provisioner" [bd489c40-fcf4-400d-af4c-913b511494e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 01:30:12.957919   66492 system_pods.go:74] duration metric: took 14.600496ms to wait for pod list to return data ...
	I0815 01:30:12.957934   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:30:12.964408   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:30:12.964437   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:30:12.964448   66492 node_conditions.go:105] duration metric: took 6.509049ms to run NodePressure ...
	I0815 01:30:12.964466   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 01:30:13.242145   66492 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 01:30:13.247986   66492 kubeadm.go:739] kubelet initialised
	I0815 01:30:13.248012   66492 kubeadm.go:740] duration metric: took 5.831891ms waiting for restarted kubelet to initialise ...
	I0815 01:30:13.248021   66492 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:30:13.254140   66492 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.260351   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260378   66492 pod_ready.go:81] duration metric: took 6.20764ms for pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.260388   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "coredns-6f6b679f8f-flg2c" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.260408   66492 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.265440   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265464   66492 pod_ready.go:81] duration metric: took 5.046431ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.265474   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "etcd-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.265481   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.271153   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271173   66492 pod_ready.go:81] duration metric: took 5.686045ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.271181   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-apiserver-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.271187   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.346976   66492 pod_ready.go:97] node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347001   66492 pod_ready.go:81] duration metric: took 75.806932ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	E0815 01:30:13.347011   66492 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-884893" hosting pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-884893" has status "Ready":"False"
	I0815 01:30:13.347018   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748456   66492 pod_ready.go:92] pod "kube-proxy-l559z" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:13.748480   66492 pod_ready.go:81] duration metric: took 401.453111ms for pod "kube-proxy-l559z" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:13.748491   66492 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
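	(Editor's note: each pod_ready wait above checks the pod's Ready condition and, while the hosting node still reports Ready=False, skips the pod and retries. A minimal external equivalent of that check, shelling out to kubectl with a standard jsonpath filter, is sketched below; the context and pod names follow the log, everything else is an assumption.)

	// pod_ready_check.go: sketch of a Ready-condition poll via kubectl.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(kubeContext, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := podReady("no-preload-884893", "kube-system", "kube-scheduler-no-preload-884893")
			if err != nil {
				log.Println("check failed, retrying:", err)
			} else if ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("pod never became Ready")
	}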
	I0815 01:30:11.812458   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:13.813405   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:16.752797   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.251123   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:17.277116   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:17.290745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:17.290825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:17.324477   66919 cri.go:89] found id: ""
	I0815 01:30:17.324505   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.324512   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:17.324517   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:17.324573   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:17.356340   66919 cri.go:89] found id: ""
	I0815 01:30:17.356373   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.356384   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:17.356392   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:17.356452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:17.392696   66919 cri.go:89] found id: ""
	I0815 01:30:17.392722   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.392732   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:17.392740   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:17.392802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:17.425150   66919 cri.go:89] found id: ""
	I0815 01:30:17.425182   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.425192   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:17.425200   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:17.425266   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:17.460679   66919 cri.go:89] found id: ""
	I0815 01:30:17.460708   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.460720   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:17.460727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:17.460805   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:17.496881   66919 cri.go:89] found id: ""
	I0815 01:30:17.496914   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.496927   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:17.496933   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:17.496985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:17.528614   66919 cri.go:89] found id: ""
	I0815 01:30:17.528643   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.528668   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:17.528676   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:17.528736   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:17.563767   66919 cri.go:89] found id: ""
	I0815 01:30:17.563792   66919 logs.go:276] 0 containers: []
	W0815 01:30:17.563799   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:17.563809   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:17.563824   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:17.576591   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:17.576619   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:17.647791   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:17.647819   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:17.647832   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:17.722889   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:17.722927   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:17.761118   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:17.761154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:15.756386   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.255794   66492 pod_ready.go:102] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:19.754538   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:30:19.754560   66492 pod_ready.go:81] duration metric: took 6.006061814s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:19.754569   66492 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	I0815 01:30:16.313295   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:18.313960   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:21.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.753406   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.316550   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:20.329377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:20.329452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:20.361773   66919 cri.go:89] found id: ""
	I0815 01:30:20.361805   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.361814   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:20.361820   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:20.361880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:20.394091   66919 cri.go:89] found id: ""
	I0815 01:30:20.394127   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.394138   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:20.394145   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:20.394210   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:20.426882   66919 cri.go:89] found id: ""
	I0815 01:30:20.426910   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.426929   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:20.426937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:20.426998   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:20.460629   66919 cri.go:89] found id: ""
	I0815 01:30:20.460678   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.460692   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:20.460699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:20.460764   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:20.492030   66919 cri.go:89] found id: ""
	I0815 01:30:20.492055   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.492063   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:20.492069   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:20.492127   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:20.523956   66919 cri.go:89] found id: ""
	I0815 01:30:20.523986   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.523994   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:20.523999   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:20.524058   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:20.556577   66919 cri.go:89] found id: ""
	I0815 01:30:20.556606   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.556617   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:20.556633   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:20.556714   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:20.589322   66919 cri.go:89] found id: ""
	I0815 01:30:20.589357   66919 logs.go:276] 0 containers: []
	W0815 01:30:20.589366   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:20.589374   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:20.589386   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:20.666950   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:20.666993   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:20.703065   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:20.703104   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:20.758120   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:20.758154   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:20.773332   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:20.773378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:20.839693   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.340487   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:23.352978   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:23.353034   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:23.386376   66919 cri.go:89] found id: ""
	I0815 01:30:23.386401   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.386411   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:23.386418   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:23.386480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:23.422251   66919 cri.go:89] found id: ""
	I0815 01:30:23.422275   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.422283   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:23.422288   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:23.422347   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:23.454363   66919 cri.go:89] found id: ""
	I0815 01:30:23.454394   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.454405   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:23.454410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:23.454471   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:23.487211   66919 cri.go:89] found id: ""
	I0815 01:30:23.487240   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.487249   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:23.487255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:23.487313   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:23.518655   66919 cri.go:89] found id: ""
	I0815 01:30:23.518680   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.518690   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:23.518695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:23.518749   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:23.553449   66919 cri.go:89] found id: ""
	I0815 01:30:23.553479   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.553489   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:23.553497   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:23.553549   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:23.582407   66919 cri.go:89] found id: ""
	I0815 01:30:23.582443   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.582459   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:23.582466   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:23.582519   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:23.612805   66919 cri.go:89] found id: ""
	I0815 01:30:23.612839   66919 logs.go:276] 0 containers: []
	W0815 01:30:23.612849   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:23.612861   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:23.612874   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:23.661661   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:23.661691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:23.674456   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:23.674491   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:23.742734   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:23.742758   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:23.742772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:23.828791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:23.828830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
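Note: the cycle above repeats because no control-plane containers have been created on this node yet and the apiserver on localhost:8443 refuses connections, so every "describe nodes" attempt fails and minikube falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of the same checks run by hand (assuming shell access to the node, e.g. via `minikube ssh` on the affected profile) would be:

	  # look for any control-plane containers created by CRI-O
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd
	  # inspect kubelet and CRI-O for why nothing started
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # confirm the apiserver endpoint is still down
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

These are the same commands shown in the surrounding log lines; no additional tooling is assumed.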
	I0815 01:30:21.761680   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.763406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:20.812796   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:23.312044   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:25.312289   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.252305   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.752410   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:26.364924   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:26.378354   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:26.378422   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:26.410209   66919 cri.go:89] found id: ""
	I0815 01:30:26.410238   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.410248   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:26.410253   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:26.410299   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:26.443885   66919 cri.go:89] found id: ""
	I0815 01:30:26.443918   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.443929   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:26.443935   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:26.443985   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:26.475786   66919 cri.go:89] found id: ""
	I0815 01:30:26.475815   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.475826   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:26.475833   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:26.475898   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:26.510635   66919 cri.go:89] found id: ""
	I0815 01:30:26.510660   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.510669   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:26.510677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:26.510739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:26.542755   66919 cri.go:89] found id: ""
	I0815 01:30:26.542779   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.542787   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:26.542792   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:26.542842   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:26.574825   66919 cri.go:89] found id: ""
	I0815 01:30:26.574896   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.574911   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:26.574919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:26.574979   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:26.612952   66919 cri.go:89] found id: ""
	I0815 01:30:26.612980   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.612991   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:26.612998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:26.613067   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:26.645339   66919 cri.go:89] found id: ""
	I0815 01:30:26.645377   66919 logs.go:276] 0 containers: []
	W0815 01:30:26.645388   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:26.645398   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:26.645415   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:26.659206   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:26.659243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:26.727526   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:26.727552   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:26.727569   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.811277   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:26.811314   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:26.851236   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:26.851270   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.402571   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:29.415017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:29.415095   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:29.448130   66919 cri.go:89] found id: ""
	I0815 01:30:29.448151   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.448159   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:29.448164   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:29.448213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:29.484156   66919 cri.go:89] found id: ""
	I0815 01:30:29.484186   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.484195   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:29.484200   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:29.484248   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:29.519760   66919 cri.go:89] found id: ""
	I0815 01:30:29.519796   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.519806   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:29.519812   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:29.519864   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:29.551336   66919 cri.go:89] found id: ""
	I0815 01:30:29.551363   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.551372   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:29.551377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:29.551428   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:29.584761   66919 cri.go:89] found id: ""
	I0815 01:30:29.584793   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.584804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:29.584811   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:29.584875   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:29.619310   66919 cri.go:89] found id: ""
	I0815 01:30:29.619335   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.619343   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:29.619351   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:29.619408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:29.653976   66919 cri.go:89] found id: ""
	I0815 01:30:29.654005   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.654016   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:29.654030   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:29.654104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:29.685546   66919 cri.go:89] found id: ""
	I0815 01:30:29.685581   66919 logs.go:276] 0 containers: []
	W0815 01:30:29.685588   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:29.685598   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:29.685613   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:29.720766   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:29.720797   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:29.771174   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:29.771207   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:29.783951   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:29.783979   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:29.853602   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:29.853622   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:29.853634   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:26.259774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:28.260345   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:27.312379   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:29.312991   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.253803   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.752012   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.434032   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:32.447831   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:32.447900   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:32.479056   66919 cri.go:89] found id: ""
	I0815 01:30:32.479086   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.479096   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:32.479102   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:32.479167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:32.511967   66919 cri.go:89] found id: ""
	I0815 01:30:32.512002   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.512014   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:32.512022   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:32.512094   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:32.547410   66919 cri.go:89] found id: ""
	I0815 01:30:32.547433   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.547441   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:32.547446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:32.547494   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:32.580829   66919 cri.go:89] found id: ""
	I0815 01:30:32.580857   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.580867   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:32.580874   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:32.580941   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:32.613391   66919 cri.go:89] found id: ""
	I0815 01:30:32.613502   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.613518   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:32.613529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:32.613619   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:32.645703   66919 cri.go:89] found id: ""
	I0815 01:30:32.645736   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.645747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:32.645754   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:32.645822   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:32.677634   66919 cri.go:89] found id: ""
	I0815 01:30:32.677667   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.677678   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:32.677685   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:32.677740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:32.708400   66919 cri.go:89] found id: ""
	I0815 01:30:32.708481   66919 logs.go:276] 0 containers: []
	W0815 01:30:32.708506   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:32.708521   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:32.708538   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:32.759869   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:32.759907   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:32.773110   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:32.773131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:32.840010   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:32.840031   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:32.840045   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:32.915894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:32.915948   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:30.261620   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:32.760735   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:34.761802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:31.813543   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:33.813715   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.752452   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:37.752484   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.752536   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.461001   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:35.473803   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:35.473874   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:35.506510   66919 cri.go:89] found id: ""
	I0815 01:30:35.506532   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.506540   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:35.506546   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:35.506593   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:35.540988   66919 cri.go:89] found id: ""
	I0815 01:30:35.541018   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.541028   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:35.541033   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:35.541084   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:35.575687   66919 cri.go:89] found id: ""
	I0815 01:30:35.575713   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.575723   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:35.575730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:35.575789   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:35.606841   66919 cri.go:89] found id: ""
	I0815 01:30:35.606871   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.606878   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:35.606884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:35.606940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:35.641032   66919 cri.go:89] found id: ""
	I0815 01:30:35.641067   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.641079   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:35.641086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:35.641150   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:35.676347   66919 cri.go:89] found id: ""
	I0815 01:30:35.676381   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.676422   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:35.676433   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:35.676497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:35.713609   66919 cri.go:89] found id: ""
	I0815 01:30:35.713634   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.713648   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:35.713655   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:35.713739   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:35.751057   66919 cri.go:89] found id: ""
	I0815 01:30:35.751083   66919 logs.go:276] 0 containers: []
	W0815 01:30:35.751094   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:35.751104   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:35.751119   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:35.822909   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:35.822935   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:35.822950   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:35.904146   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:35.904186   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:35.942285   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:35.942316   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:35.990920   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:35.990959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.504900   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:38.518230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:38.518301   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:38.552402   66919 cri.go:89] found id: ""
	I0815 01:30:38.552428   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.552436   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:38.552441   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:38.552500   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:38.588617   66919 cri.go:89] found id: ""
	I0815 01:30:38.588643   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.588668   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:38.588677   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:38.588740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:38.621168   66919 cri.go:89] found id: ""
	I0815 01:30:38.621196   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.621204   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:38.621210   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:38.621258   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:38.654522   66919 cri.go:89] found id: ""
	I0815 01:30:38.654550   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.654559   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:38.654565   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:38.654631   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:38.688710   66919 cri.go:89] found id: ""
	I0815 01:30:38.688735   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.688743   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:38.688748   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:38.688802   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:38.720199   66919 cri.go:89] found id: ""
	I0815 01:30:38.720224   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.720235   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:38.720242   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:38.720304   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:38.753996   66919 cri.go:89] found id: ""
	I0815 01:30:38.754026   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.754036   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:38.754043   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:38.754102   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:38.787488   66919 cri.go:89] found id: ""
	I0815 01:30:38.787514   66919 logs.go:276] 0 containers: []
	W0815 01:30:38.787522   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:38.787530   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:38.787542   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:38.840062   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:38.840092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:38.854501   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:38.854543   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:38.933715   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:38.933749   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:38.933766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:39.010837   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:39.010871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:37.260918   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:39.263490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:35.816265   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:38.313383   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.252613   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:44.751882   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:41.552027   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:41.566058   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:41.566136   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:41.603076   66919 cri.go:89] found id: ""
	I0815 01:30:41.603110   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.603123   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:41.603132   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:41.603201   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:41.637485   66919 cri.go:89] found id: ""
	I0815 01:30:41.637524   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.637536   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:41.637543   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:41.637609   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:41.671313   66919 cri.go:89] found id: ""
	I0815 01:30:41.671337   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.671345   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:41.671350   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:41.671399   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:41.704715   66919 cri.go:89] found id: ""
	I0815 01:30:41.704741   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.704752   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:41.704759   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:41.704821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:41.736357   66919 cri.go:89] found id: ""
	I0815 01:30:41.736388   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.736398   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:41.736405   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:41.736465   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:41.770373   66919 cri.go:89] found id: ""
	I0815 01:30:41.770401   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.770409   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:41.770415   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:41.770463   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:41.805965   66919 cri.go:89] found id: ""
	I0815 01:30:41.805990   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.805998   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:41.806003   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:41.806054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:41.841753   66919 cri.go:89] found id: ""
	I0815 01:30:41.841778   66919 logs.go:276] 0 containers: []
	W0815 01:30:41.841786   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:41.841794   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:41.841805   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:41.914515   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:41.914539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:41.914557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:41.988345   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:41.988380   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:42.023814   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:42.023841   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:42.075210   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:42.075243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.589738   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:44.602604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:44.602663   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:44.634203   66919 cri.go:89] found id: ""
	I0815 01:30:44.634236   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.634247   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:44.634254   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:44.634341   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:44.683449   66919 cri.go:89] found id: ""
	I0815 01:30:44.683480   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.683490   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:44.683495   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:44.683563   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:44.716485   66919 cri.go:89] found id: ""
	I0815 01:30:44.716509   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.716520   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:44.716527   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:44.716595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:44.755708   66919 cri.go:89] found id: ""
	I0815 01:30:44.755737   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.755746   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:44.755755   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:44.755823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:44.791754   66919 cri.go:89] found id: ""
	I0815 01:30:44.791781   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.791790   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:44.791796   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:44.791867   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:44.825331   66919 cri.go:89] found id: ""
	I0815 01:30:44.825355   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.825363   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:44.825369   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:44.825416   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:44.861680   66919 cri.go:89] found id: ""
	I0815 01:30:44.861705   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.861713   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:44.861718   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:44.861770   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:44.898810   66919 cri.go:89] found id: ""
	I0815 01:30:44.898844   66919 logs.go:276] 0 containers: []
	W0815 01:30:44.898857   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:44.898867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:44.898881   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:44.949416   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:44.949449   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:44.964230   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:44.964258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:45.038989   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:45.039012   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:45.039027   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:45.116311   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:45.116345   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:41.760941   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:43.764802   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:40.811825   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:42.813489   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:45.312497   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:46.753090   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:49.252535   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.658176   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:47.671312   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:47.671375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:47.705772   66919 cri.go:89] found id: ""
	I0815 01:30:47.705800   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.705812   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:47.705819   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:47.705882   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:47.737812   66919 cri.go:89] found id: ""
	I0815 01:30:47.737846   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.737857   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:47.737864   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:47.737928   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:47.773079   66919 cri.go:89] found id: ""
	I0815 01:30:47.773103   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.773114   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:47.773121   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:47.773184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:47.804941   66919 cri.go:89] found id: ""
	I0815 01:30:47.804970   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.804980   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:47.804990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:47.805053   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:47.841215   66919 cri.go:89] found id: ""
	I0815 01:30:47.841249   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.841260   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:47.841266   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:47.841322   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:47.872730   66919 cri.go:89] found id: ""
	I0815 01:30:47.872761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.872772   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:47.872780   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:47.872833   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:47.905731   66919 cri.go:89] found id: ""
	I0815 01:30:47.905761   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.905769   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:47.905774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:47.905825   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:47.939984   66919 cri.go:89] found id: ""
	I0815 01:30:47.940017   66919 logs.go:276] 0 containers: []
	W0815 01:30:47.940028   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:47.940040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:47.940053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:47.989493   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:47.989526   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:48.002567   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:48.002605   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:48.066691   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:48.066709   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:48.066720   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:48.142512   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:48.142551   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:46.260920   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:48.761706   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:47.813316   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.311266   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:51.253220   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.751360   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:50.681288   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:50.695289   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:50.695358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:50.729264   66919 cri.go:89] found id: ""
	I0815 01:30:50.729293   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.729303   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:50.729310   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:50.729374   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:50.765308   66919 cri.go:89] found id: ""
	I0815 01:30:50.765337   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.765348   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:50.765354   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:50.765421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:50.801332   66919 cri.go:89] found id: ""
	I0815 01:30:50.801362   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.801382   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:50.801391   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:50.801452   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:50.834822   66919 cri.go:89] found id: ""
	I0815 01:30:50.834855   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.834866   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:50.834873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:50.834937   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:50.868758   66919 cri.go:89] found id: ""
	I0815 01:30:50.868785   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.868804   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:50.868817   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:50.868886   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:50.902003   66919 cri.go:89] found id: ""
	I0815 01:30:50.902035   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.902046   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:50.902053   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:50.902113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:50.934517   66919 cri.go:89] found id: ""
	I0815 01:30:50.934546   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.934562   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:50.934569   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:50.934628   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:50.968195   66919 cri.go:89] found id: ""
	I0815 01:30:50.968224   66919 logs.go:276] 0 containers: []
	W0815 01:30:50.968233   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:50.968244   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:50.968258   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:51.019140   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:51.019176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:51.032046   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:51.032072   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:51.109532   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:51.109555   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:51.109571   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:51.186978   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:51.187021   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:53.734145   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:53.747075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:53.747146   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:53.779774   66919 cri.go:89] found id: ""
	I0815 01:30:53.779800   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.779807   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:53.779812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:53.779861   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:53.813079   66919 cri.go:89] found id: ""
	I0815 01:30:53.813119   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.813130   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:53.813137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:53.813198   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:53.847148   66919 cri.go:89] found id: ""
	I0815 01:30:53.847179   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.847188   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:53.847195   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:53.847261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:53.880562   66919 cri.go:89] found id: ""
	I0815 01:30:53.880589   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.880596   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:53.880604   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:53.880666   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:53.913334   66919 cri.go:89] found id: ""
	I0815 01:30:53.913364   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.913372   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:53.913378   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:53.913436   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:53.946008   66919 cri.go:89] found id: ""
	I0815 01:30:53.946042   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.946052   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:53.946057   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:53.946111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:53.978557   66919 cri.go:89] found id: ""
	I0815 01:30:53.978586   66919 logs.go:276] 0 containers: []
	W0815 01:30:53.978595   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:53.978600   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:53.978653   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:54.010358   66919 cri.go:89] found id: ""
	I0815 01:30:54.010385   66919 logs.go:276] 0 containers: []
	W0815 01:30:54.010392   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:54.010401   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:54.010413   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:54.059780   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:54.059815   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:54.073397   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:54.073428   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:54.140996   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:54.141024   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:54.141039   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:54.215401   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:54.215437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:51.261078   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:53.261318   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:52.315214   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:54.813501   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:55.751557   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.766434   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:56.756848   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:56.769371   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:56.769434   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:56.806021   66919 cri.go:89] found id: ""
	I0815 01:30:56.806046   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.806076   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:56.806100   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:56.806170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:56.855347   66919 cri.go:89] found id: ""
	I0815 01:30:56.855377   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.855393   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:56.855400   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:56.855464   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:56.898669   66919 cri.go:89] found id: ""
	I0815 01:30:56.898700   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.898710   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:56.898717   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:56.898785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:56.955078   66919 cri.go:89] found id: ""
	I0815 01:30:56.955112   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.955124   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:56.955131   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:56.955205   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:30:56.987638   66919 cri.go:89] found id: ""
	I0815 01:30:56.987666   66919 logs.go:276] 0 containers: []
	W0815 01:30:56.987674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:30:56.987680   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:30:56.987729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:30:57.019073   66919 cri.go:89] found id: ""
	I0815 01:30:57.019101   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.019109   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:30:57.019114   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:30:57.019170   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:30:57.051695   66919 cri.go:89] found id: ""
	I0815 01:30:57.051724   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.051735   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:30:57.051742   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:30:57.051804   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:30:57.085066   66919 cri.go:89] found id: ""
	I0815 01:30:57.085095   66919 logs.go:276] 0 containers: []
	W0815 01:30:57.085106   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:30:57.085117   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:30:57.085131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:30:57.134043   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:30:57.134080   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:57.147838   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:30:57.147871   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:30:57.221140   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:30:57.221174   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:30:57.221190   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:30:57.302571   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:30:57.302607   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:30:59.841296   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:30:59.854638   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:30:59.854700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:30:59.885940   66919 cri.go:89] found id: ""
	I0815 01:30:59.885963   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.885971   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:30:59.885976   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:30:59.886026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:30:59.918783   66919 cri.go:89] found id: ""
	I0815 01:30:59.918812   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.918824   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:30:59.918832   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:30:59.918905   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:30:59.952122   66919 cri.go:89] found id: ""
	I0815 01:30:59.952153   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.952163   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:30:59.952169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:30:59.952233   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:30:59.987303   66919 cri.go:89] found id: ""
	I0815 01:30:59.987331   66919 logs.go:276] 0 containers: []
	W0815 01:30:59.987339   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:30:59.987344   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:30:59.987410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:00.024606   66919 cri.go:89] found id: ""
	I0815 01:31:00.024640   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.024666   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:00.024677   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:00.024738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:00.055993   66919 cri.go:89] found id: ""
	I0815 01:31:00.056020   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.056031   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:00.056039   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:00.056104   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:00.087128   66919 cri.go:89] found id: ""
	I0815 01:31:00.087161   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.087173   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:00.087180   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:00.087249   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:00.120436   66919 cri.go:89] found id: ""
	I0815 01:31:00.120465   66919 logs.go:276] 0 containers: []
	W0815 01:31:00.120476   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:00.120488   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:00.120503   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:30:55.261504   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.762139   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:57.312874   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:30:59.811724   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.252248   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.751908   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:00.133810   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:00.133838   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:00.199949   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:00.199971   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:00.199984   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:00.284740   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:00.284778   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.321791   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:00.321827   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:02.873253   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:02.885846   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:02.885925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:02.924698   66919 cri.go:89] found id: ""
	I0815 01:31:02.924727   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.924739   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:02.924745   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:02.924807   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:02.961352   66919 cri.go:89] found id: ""
	I0815 01:31:02.961383   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.961391   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:02.961396   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:02.961450   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:02.996293   66919 cri.go:89] found id: ""
	I0815 01:31:02.996327   66919 logs.go:276] 0 containers: []
	W0815 01:31:02.996334   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:02.996341   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:02.996391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:03.028976   66919 cri.go:89] found id: ""
	I0815 01:31:03.029005   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.029013   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:03.029019   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:03.029066   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:03.063388   66919 cri.go:89] found id: ""
	I0815 01:31:03.063425   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.063436   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:03.063445   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:03.063518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:03.099730   66919 cri.go:89] found id: ""
	I0815 01:31:03.099757   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.099767   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:03.099778   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:03.099841   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:03.132347   66919 cri.go:89] found id: ""
	I0815 01:31:03.132370   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.132380   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:03.132386   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:03.132495   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:03.165120   66919 cri.go:89] found id: ""
	I0815 01:31:03.165146   66919 logs.go:276] 0 containers: []
	W0815 01:31:03.165153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:03.165161   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:03.165173   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:03.217544   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:03.217576   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:03.232299   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:03.232341   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:03.297458   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:03.297484   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:03.297500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:03.377304   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:03.377338   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:00.261621   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:02.760996   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.762492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:01.814111   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:04.311963   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.251139   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:07.252081   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.253611   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:05.915544   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:05.929154   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:05.929231   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:05.972008   66919 cri.go:89] found id: ""
	I0815 01:31:05.972037   66919 logs.go:276] 0 containers: []
	W0815 01:31:05.972048   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:05.972055   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:05.972119   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:06.005459   66919 cri.go:89] found id: ""
	I0815 01:31:06.005486   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.005494   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:06.005499   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:06.005550   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:06.037623   66919 cri.go:89] found id: ""
	I0815 01:31:06.037655   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.037666   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:06.037674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:06.037733   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:06.070323   66919 cri.go:89] found id: ""
	I0815 01:31:06.070347   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.070356   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:06.070361   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:06.070419   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:06.103570   66919 cri.go:89] found id: ""
	I0815 01:31:06.103593   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.103601   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:06.103606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:06.103654   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:06.136253   66919 cri.go:89] found id: ""
	I0815 01:31:06.136281   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.136291   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:06.136297   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:06.136356   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:06.170851   66919 cri.go:89] found id: ""
	I0815 01:31:06.170878   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.170890   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:06.170895   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:06.170942   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:06.205836   66919 cri.go:89] found id: ""
	I0815 01:31:06.205860   66919 logs.go:276] 0 containers: []
	W0815 01:31:06.205867   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:06.205876   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:06.205892   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:06.282838   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:06.282872   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:06.323867   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:06.323898   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:06.378187   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:06.378230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:06.393126   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:06.393160   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:06.460898   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:08.961182   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:08.973963   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:08.974048   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:09.007466   66919 cri.go:89] found id: ""
	I0815 01:31:09.007494   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.007502   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:09.007509   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:09.007567   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:09.045097   66919 cri.go:89] found id: ""
	I0815 01:31:09.045123   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.045131   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:09.045137   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:09.045187   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:09.078326   66919 cri.go:89] found id: ""
	I0815 01:31:09.078356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.078380   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:09.078389   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:09.078455   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:09.109430   66919 cri.go:89] found id: ""
	I0815 01:31:09.109460   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.109471   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:09.109478   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:09.109544   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:09.143200   66919 cri.go:89] found id: ""
	I0815 01:31:09.143225   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.143234   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:09.143239   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:09.143306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:09.179057   66919 cri.go:89] found id: ""
	I0815 01:31:09.179081   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.179089   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:09.179095   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:09.179141   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:09.213327   66919 cri.go:89] found id: ""
	I0815 01:31:09.213356   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.213368   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:09.213375   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:09.213425   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:09.246716   66919 cri.go:89] found id: ""
	I0815 01:31:09.246745   66919 logs.go:276] 0 containers: []
	W0815 01:31:09.246756   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:09.246763   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:09.246775   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:09.299075   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:09.299105   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:09.313023   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:09.313054   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:09.377521   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:09.377545   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:09.377557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:09.453791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:09.453830   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:07.260671   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:09.261005   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:06.313082   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:08.812290   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.753344   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.251251   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.991473   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:12.004615   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:12.004707   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:12.045028   66919 cri.go:89] found id: ""
	I0815 01:31:12.045057   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.045066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:12.045072   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:12.045121   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:12.077887   66919 cri.go:89] found id: ""
	I0815 01:31:12.077910   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.077920   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:12.077926   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:12.077974   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:12.110214   66919 cri.go:89] found id: ""
	I0815 01:31:12.110249   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.110260   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:12.110268   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:12.110328   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:12.142485   66919 cri.go:89] found id: ""
	I0815 01:31:12.142509   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.142516   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:12.142522   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:12.142572   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:12.176921   66919 cri.go:89] found id: ""
	I0815 01:31:12.176951   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.176962   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:12.176969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:12.177030   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:12.212093   66919 cri.go:89] found id: ""
	I0815 01:31:12.212142   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.212154   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:12.212162   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:12.212216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:12.246980   66919 cri.go:89] found id: ""
	I0815 01:31:12.247007   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.247017   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:12.247024   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:12.247082   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:12.280888   66919 cri.go:89] found id: ""
	I0815 01:31:12.280918   66919 logs.go:276] 0 containers: []
	W0815 01:31:12.280931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:12.280943   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:12.280959   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:12.333891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:12.333923   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:12.346753   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:12.346783   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:12.415652   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:12.415675   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:12.415692   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:12.494669   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:12.494706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.031185   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:15.044605   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:15.044704   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:15.081810   66919 cri.go:89] found id: ""
	I0815 01:31:15.081846   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.081860   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:15.081869   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:15.081932   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:15.113517   66919 cri.go:89] found id: ""
	I0815 01:31:15.113550   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.113562   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:15.113568   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:15.113641   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:11.762158   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:14.260892   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:11.314672   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:13.811754   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:16.751293   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.752458   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.147638   66919 cri.go:89] found id: ""
	I0815 01:31:15.147665   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.147673   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:15.147679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:15.147746   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:15.178938   66919 cri.go:89] found id: ""
	I0815 01:31:15.178966   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.178976   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:15.178990   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:15.179054   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:15.212304   66919 cri.go:89] found id: ""
	I0815 01:31:15.212333   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.212346   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:15.212353   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:15.212414   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:15.245991   66919 cri.go:89] found id: ""
	I0815 01:31:15.246012   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.246019   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:15.246025   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:15.246074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:15.280985   66919 cri.go:89] found id: ""
	I0815 01:31:15.281016   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.281034   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:15.281041   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:15.281105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:15.315902   66919 cri.go:89] found id: ""
	I0815 01:31:15.315939   66919 logs.go:276] 0 containers: []
	W0815 01:31:15.315948   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:15.315958   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:15.315973   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:15.329347   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:15.329375   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:15.400366   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:15.400388   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:15.400405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:15.479074   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:15.479118   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:15.516204   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:15.516230   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.070588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:18.083120   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:18.083196   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:18.115673   66919 cri.go:89] found id: ""
	I0815 01:31:18.115701   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.115709   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:18.115715   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:18.115772   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:18.147011   66919 cri.go:89] found id: ""
	I0815 01:31:18.147039   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.147047   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:18.147053   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:18.147126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:18.179937   66919 cri.go:89] found id: ""
	I0815 01:31:18.179960   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.179968   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:18.179973   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:18.180032   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:18.214189   66919 cri.go:89] found id: ""
	I0815 01:31:18.214216   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.214224   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:18.214230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:18.214289   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:18.252102   66919 cri.go:89] found id: ""
	I0815 01:31:18.252130   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.252137   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:18.252143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:18.252204   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:18.285481   66919 cri.go:89] found id: ""
	I0815 01:31:18.285519   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.285529   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:18.285536   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:18.285599   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:18.321609   66919 cri.go:89] found id: ""
	I0815 01:31:18.321636   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.321651   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:18.321660   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:18.321723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:18.352738   66919 cri.go:89] found id: ""
	I0815 01:31:18.352766   66919 logs.go:276] 0 containers: []
	W0815 01:31:18.352774   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:18.352782   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:18.352796   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:18.401481   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:18.401517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:18.414984   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:18.415016   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:18.485539   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:18.485559   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:18.485579   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:18.569611   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:18.569651   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
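	The cycle above is minikube's retry loop while it waits for the v1.20.0 control plane to come up: it probes for a kube-apiserver process, lists CRI containers for each control-plane component (all empty here), then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status diagnostics before trying again. A minimal shell sketch of the same probe, with the commands copied from the log lines above (the kubectl binary and kubeconfig paths are the ones minikube places on the node and may differ on other setups):

	    # probe for a running apiserver process and container
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # diagnostics gathered when the probe finds nothing
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a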
	I0815 01:31:16.262086   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:18.760590   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:15.812958   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:17.813230   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:20.312988   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:21.255232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.751939   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
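	In parallel, the other test processes in this run (66492, 67000 and 67451) are stuck polling their metrics-server pods, which never report Ready. A rough by-hand equivalent of that readiness check, using one of the pod names from the log (the kubeconfig/context of the profile under test is not shown in these lines and would have to be supplied):

	    # print the Ready condition of the metrics-server pod seen in the log above
	    kubectl -n kube-system get pod metrics-server-6867b74b74-gdpxh \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'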
	I0815 01:31:21.109609   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:21.123972   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:21.124038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:21.157591   66919 cri.go:89] found id: ""
	I0815 01:31:21.157624   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.157636   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:21.157643   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:21.157700   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:21.192506   66919 cri.go:89] found id: ""
	I0815 01:31:21.192535   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.192545   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:21.192552   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:21.192623   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:21.224873   66919 cri.go:89] found id: ""
	I0815 01:31:21.224901   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.224912   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:21.224919   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:21.224980   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:21.258398   66919 cri.go:89] found id: ""
	I0815 01:31:21.258427   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.258438   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:21.258446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:21.258513   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:21.295754   66919 cri.go:89] found id: ""
	I0815 01:31:21.295781   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.295792   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:21.295799   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:21.295870   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:21.330174   66919 cri.go:89] found id: ""
	I0815 01:31:21.330195   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.330202   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:21.330207   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:21.330255   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:21.364565   66919 cri.go:89] found id: ""
	I0815 01:31:21.364588   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.364596   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:21.364639   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:21.364717   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:21.397889   66919 cri.go:89] found id: ""
	I0815 01:31:21.397920   66919 logs.go:276] 0 containers: []
	W0815 01:31:21.397931   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:21.397942   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:21.397961   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.471788   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:21.471822   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:21.508837   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:21.508867   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:21.560538   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:21.560575   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:21.575581   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:21.575622   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:21.647798   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.148566   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:24.160745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:24.160813   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:24.192535   66919 cri.go:89] found id: ""
	I0815 01:31:24.192558   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.192566   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:24.192572   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:24.192630   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:24.223468   66919 cri.go:89] found id: ""
	I0815 01:31:24.223499   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.223507   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:24.223513   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:24.223561   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:24.258905   66919 cri.go:89] found id: ""
	I0815 01:31:24.258931   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.258938   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:24.258944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:24.259006   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:24.298914   66919 cri.go:89] found id: ""
	I0815 01:31:24.298942   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.298949   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:24.298955   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:24.299011   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:24.331962   66919 cri.go:89] found id: ""
	I0815 01:31:24.331992   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.332003   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:24.332011   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:24.332078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:24.365984   66919 cri.go:89] found id: ""
	I0815 01:31:24.366014   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.366022   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:24.366028   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:24.366078   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:24.402397   66919 cri.go:89] found id: ""
	I0815 01:31:24.402432   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.402442   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:24.402450   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:24.402516   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:24.434662   66919 cri.go:89] found id: ""
	I0815 01:31:24.434691   66919 logs.go:276] 0 containers: []
	W0815 01:31:24.434704   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:24.434714   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:24.434730   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:24.474087   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:24.474117   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:24.524494   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:24.524533   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:24.537770   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:24.537795   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:24.608594   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:24.608634   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:24.608650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:21.260845   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:23.260974   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:22.811939   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:24.812873   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:26.252688   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.751413   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.191588   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:27.206339   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:27.206421   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:27.241277   66919 cri.go:89] found id: ""
	I0815 01:31:27.241306   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.241315   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:27.241321   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:27.241385   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:27.275952   66919 cri.go:89] found id: ""
	I0815 01:31:27.275983   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.275992   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:27.275998   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:27.276060   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:27.308320   66919 cri.go:89] found id: ""
	I0815 01:31:27.308348   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.308359   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:27.308366   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:27.308424   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:27.340957   66919 cri.go:89] found id: ""
	I0815 01:31:27.340987   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.340998   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:27.341007   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:27.341135   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:27.373078   66919 cri.go:89] found id: ""
	I0815 01:31:27.373102   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.373110   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:27.373117   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:27.373182   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:27.409250   66919 cri.go:89] found id: ""
	I0815 01:31:27.409277   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.409289   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:27.409296   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:27.409358   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:27.444244   66919 cri.go:89] found id: ""
	I0815 01:31:27.444270   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.444280   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:27.444287   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:27.444360   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:27.482507   66919 cri.go:89] found id: ""
	I0815 01:31:27.482535   66919 logs.go:276] 0 containers: []
	W0815 01:31:27.482543   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:27.482552   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:27.482570   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:27.521896   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:27.521931   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:27.575404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:27.575437   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:27.587713   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:27.587745   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:27.650431   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:27.650461   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:27.650475   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:25.761255   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:28.261210   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:27.312866   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:29.812673   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.752414   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:33.252178   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:30.228663   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:30.242782   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:30.242852   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:30.278385   66919 cri.go:89] found id: ""
	I0815 01:31:30.278410   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.278420   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:30.278428   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:30.278483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:30.316234   66919 cri.go:89] found id: ""
	I0815 01:31:30.316258   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.316268   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:30.316276   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:30.316335   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:30.348738   66919 cri.go:89] found id: ""
	I0815 01:31:30.348767   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.348778   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:30.348787   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:30.348851   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:30.380159   66919 cri.go:89] found id: ""
	I0815 01:31:30.380189   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.380201   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:30.380208   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:30.380261   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:30.414888   66919 cri.go:89] found id: ""
	I0815 01:31:30.414911   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.414919   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:30.414924   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:30.414977   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:30.447898   66919 cri.go:89] found id: ""
	I0815 01:31:30.447923   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.447931   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:30.447937   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:30.448024   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:30.479148   66919 cri.go:89] found id: ""
	I0815 01:31:30.479177   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.479187   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:30.479193   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:30.479245   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:30.511725   66919 cri.go:89] found id: ""
	I0815 01:31:30.511752   66919 logs.go:276] 0 containers: []
	W0815 01:31:30.511760   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:30.511768   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:30.511780   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:30.562554   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:30.562590   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:30.575869   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:30.575896   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:30.642642   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:30.642662   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:30.642675   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:30.734491   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:30.734530   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:33.276918   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:33.289942   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:33.290010   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:33.322770   66919 cri.go:89] found id: ""
	I0815 01:31:33.322799   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.322806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:33.322813   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:33.322862   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:33.359474   66919 cri.go:89] found id: ""
	I0815 01:31:33.359503   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.359513   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:33.359520   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:33.359590   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:33.391968   66919 cri.go:89] found id: ""
	I0815 01:31:33.391996   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.392007   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:33.392014   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:33.392076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:33.423830   66919 cri.go:89] found id: ""
	I0815 01:31:33.423853   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.423861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:33.423866   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:33.423914   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:33.454991   66919 cri.go:89] found id: ""
	I0815 01:31:33.455014   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.455022   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:33.455027   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:33.455076   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:33.492150   66919 cri.go:89] found id: ""
	I0815 01:31:33.492173   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.492181   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:33.492187   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:33.492236   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:33.525206   66919 cri.go:89] found id: ""
	I0815 01:31:33.525237   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.525248   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:33.525255   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:33.525331   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:33.558939   66919 cri.go:89] found id: ""
	I0815 01:31:33.558973   66919 logs.go:276] 0 containers: []
	W0815 01:31:33.558984   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:33.558995   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:33.559011   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:33.616977   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:33.617029   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:33.629850   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:33.629879   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:33.698029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:33.698052   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:33.698069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:33.776609   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:33.776641   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:30.261492   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.761417   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.761672   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:32.315096   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:34.811837   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:35.751307   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:37.753280   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.320299   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:36.333429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:36.333492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:36.366810   66919 cri.go:89] found id: ""
	I0815 01:31:36.366846   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.366858   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:36.366866   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:36.366918   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:36.405898   66919 cri.go:89] found id: ""
	I0815 01:31:36.405930   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.405942   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:36.405949   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:36.406017   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:36.471396   66919 cri.go:89] found id: ""
	I0815 01:31:36.471432   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.471445   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:36.471453   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:36.471524   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:36.504319   66919 cri.go:89] found id: ""
	I0815 01:31:36.504355   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.504367   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:36.504373   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:36.504430   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:36.542395   66919 cri.go:89] found id: ""
	I0815 01:31:36.542423   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.542431   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:36.542437   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:36.542492   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:36.576279   66919 cri.go:89] found id: ""
	I0815 01:31:36.576310   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.576320   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:36.576327   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:36.576391   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:36.609215   66919 cri.go:89] found id: ""
	I0815 01:31:36.609243   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.609251   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:36.609256   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:36.609306   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:36.641911   66919 cri.go:89] found id: ""
	I0815 01:31:36.641936   66919 logs.go:276] 0 containers: []
	W0815 01:31:36.641944   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:36.641952   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:36.641964   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:36.691751   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:36.691784   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:36.704619   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:36.704644   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:36.768328   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:36.768348   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:36.768360   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:36.843727   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:36.843759   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:39.381851   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:39.396205   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:39.396284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:39.430646   66919 cri.go:89] found id: ""
	I0815 01:31:39.430673   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.430681   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:39.430688   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:39.430751   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:39.468470   66919 cri.go:89] found id: ""
	I0815 01:31:39.468504   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.468517   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:39.468526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:39.468603   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:39.500377   66919 cri.go:89] found id: ""
	I0815 01:31:39.500407   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.500416   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:39.500423   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:39.500490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:39.532411   66919 cri.go:89] found id: ""
	I0815 01:31:39.532440   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.532447   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:39.532452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:39.532504   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:39.564437   66919 cri.go:89] found id: ""
	I0815 01:31:39.564463   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.564471   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:39.564476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:39.564528   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:39.598732   66919 cri.go:89] found id: ""
	I0815 01:31:39.598757   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.598765   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:39.598771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:39.598837   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:39.640429   66919 cri.go:89] found id: ""
	I0815 01:31:39.640457   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.640469   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:39.640476   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:39.640536   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:39.672116   66919 cri.go:89] found id: ""
	I0815 01:31:39.672142   66919 logs.go:276] 0 containers: []
	W0815 01:31:39.672151   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:39.672159   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:39.672171   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:39.721133   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:39.721170   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:39.734024   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:39.734060   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:39.799465   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:39.799487   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:39.799501   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:39.880033   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:39.880068   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:37.263319   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.762708   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:36.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:39.312718   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:40.251411   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.252627   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.750964   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:42.421276   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:42.438699   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:42.438760   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:42.473213   66919 cri.go:89] found id: ""
	I0815 01:31:42.473239   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.473246   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:42.473251   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:42.473311   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:42.509493   66919 cri.go:89] found id: ""
	I0815 01:31:42.509523   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.509533   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:42.509538   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:42.509594   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:42.543625   66919 cri.go:89] found id: ""
	I0815 01:31:42.543649   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.543659   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:42.543665   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:42.543731   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:42.581756   66919 cri.go:89] found id: ""
	I0815 01:31:42.581784   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.581794   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:42.581801   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:42.581865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:42.615595   66919 cri.go:89] found id: ""
	I0815 01:31:42.615618   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.615626   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:42.615631   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:42.615689   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:42.652938   66919 cri.go:89] found id: ""
	I0815 01:31:42.652961   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.652973   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:42.652979   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:42.653026   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:42.689362   66919 cri.go:89] found id: ""
	I0815 01:31:42.689391   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.689399   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:42.689406   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:42.689460   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:42.725880   66919 cri.go:89] found id: ""
	I0815 01:31:42.725903   66919 logs.go:276] 0 containers: []
	W0815 01:31:42.725911   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:42.725920   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:42.725932   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:42.798531   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:42.798553   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:42.798567   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:42.878583   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:42.878617   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:42.916218   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:42.916245   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:42.968613   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:42.968650   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:42.260936   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:44.262272   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:41.315219   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:43.812950   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:46.751554   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.752369   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.482622   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:45.494847   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:45.494917   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:45.526105   66919 cri.go:89] found id: ""
	I0815 01:31:45.526130   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.526139   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:45.526145   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:45.526195   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:45.558218   66919 cri.go:89] found id: ""
	I0815 01:31:45.558247   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.558258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:45.558265   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:45.558327   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:45.589922   66919 cri.go:89] found id: ""
	I0815 01:31:45.589950   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.589961   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:45.589969   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:45.590037   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:45.622639   66919 cri.go:89] found id: ""
	I0815 01:31:45.622670   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.622685   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:45.622690   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:45.622740   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:45.659274   66919 cri.go:89] found id: ""
	I0815 01:31:45.659301   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.659309   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:45.659314   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:45.659362   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:45.690768   66919 cri.go:89] found id: ""
	I0815 01:31:45.690795   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.690804   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:45.690810   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:45.690860   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:45.726862   66919 cri.go:89] found id: ""
	I0815 01:31:45.726885   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.726892   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:45.726898   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:45.726943   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:45.761115   66919 cri.go:89] found id: ""
	I0815 01:31:45.761142   66919 logs.go:276] 0 containers: []
	W0815 01:31:45.761153   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:45.761164   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:45.761179   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:45.774290   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:45.774335   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:45.843029   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:45.843053   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:45.843069   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:45.918993   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:45.919032   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:45.955647   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:45.955685   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.506376   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:48.518173   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:48.518234   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:48.550773   66919 cri.go:89] found id: ""
	I0815 01:31:48.550798   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.550806   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:48.550812   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:48.550865   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:48.582398   66919 cri.go:89] found id: ""
	I0815 01:31:48.582431   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.582442   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:48.582449   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:48.582512   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:48.613989   66919 cri.go:89] found id: ""
	I0815 01:31:48.614023   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.614036   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:48.614045   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:48.614114   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:48.645269   66919 cri.go:89] found id: ""
	I0815 01:31:48.645306   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.645317   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:48.645326   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:48.645394   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:48.680588   66919 cri.go:89] found id: ""
	I0815 01:31:48.680615   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.680627   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:48.680636   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:48.680723   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:48.719580   66919 cri.go:89] found id: ""
	I0815 01:31:48.719607   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.719615   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:48.719621   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:48.719684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:48.756573   66919 cri.go:89] found id: ""
	I0815 01:31:48.756597   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.756606   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:48.756613   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:48.756684   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:48.793983   66919 cri.go:89] found id: ""
	I0815 01:31:48.794018   66919 logs.go:276] 0 containers: []
	W0815 01:31:48.794029   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:48.794040   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:48.794053   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:48.847776   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:48.847811   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:48.870731   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:48.870762   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:48.960519   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:48.960548   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:48.960565   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:49.037502   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:49.037535   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:46.761461   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.761907   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:45.813203   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:48.313262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.251455   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.252808   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:51.576022   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:51.589531   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:51.589595   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:51.623964   66919 cri.go:89] found id: ""
	I0815 01:31:51.623991   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.624000   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:51.624008   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:51.624074   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:51.657595   66919 cri.go:89] found id: ""
	I0815 01:31:51.657618   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.657626   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:51.657632   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:51.657681   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:51.692462   66919 cri.go:89] found id: ""
	I0815 01:31:51.692490   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.692501   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:51.692507   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:51.692570   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:51.724210   66919 cri.go:89] found id: ""
	I0815 01:31:51.724249   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.724259   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:51.724267   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:51.724329   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:51.756450   66919 cri.go:89] found id: ""
	I0815 01:31:51.756476   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.756486   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:51.756493   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:51.756555   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:51.789082   66919 cri.go:89] found id: ""
	I0815 01:31:51.789114   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.789126   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:51.789133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:51.789183   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:51.822390   66919 cri.go:89] found id: ""
	I0815 01:31:51.822420   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.822431   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:51.822438   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:51.822491   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:51.855977   66919 cri.go:89] found id: ""
	I0815 01:31:51.856004   66919 logs.go:276] 0 containers: []
	W0815 01:31:51.856014   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:51.856025   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:51.856040   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:51.904470   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:51.904500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:51.918437   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:51.918466   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:51.991742   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:51.991770   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:51.991785   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:52.065894   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:52.065926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:54.602000   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:54.616388   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:54.616466   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:54.675750   66919 cri.go:89] found id: ""
	I0815 01:31:54.675779   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.675793   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:54.675802   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:54.675857   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:54.710581   66919 cri.go:89] found id: ""
	I0815 01:31:54.710609   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.710620   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:54.710627   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:54.710691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:54.747267   66919 cri.go:89] found id: ""
	I0815 01:31:54.747304   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.747316   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:54.747325   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:54.747387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:54.784175   66919 cri.go:89] found id: ""
	I0815 01:31:54.784209   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.784221   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:54.784230   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:54.784295   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:54.820360   66919 cri.go:89] found id: ""
	I0815 01:31:54.820395   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.820405   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:54.820412   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:54.820480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:54.853176   66919 cri.go:89] found id: ""
	I0815 01:31:54.853204   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.853214   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:54.853222   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:54.853281   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:54.886063   66919 cri.go:89] found id: ""
	I0815 01:31:54.886092   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.886105   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:54.886112   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:54.886171   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:54.919495   66919 cri.go:89] found id: ""
	I0815 01:31:54.919529   66919 logs.go:276] 0 containers: []
	W0815 01:31:54.919540   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:54.919558   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:54.919574   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:54.973177   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:54.973213   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:54.986864   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:54.986899   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:55.052637   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:55.052685   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:55.052700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:51.260314   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:53.261883   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:50.812208   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:52.812356   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:54.812990   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.750709   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.751319   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.752400   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:55.133149   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:55.133180   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:57.672833   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:31:57.686035   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:31:57.686099   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:31:57.718612   66919 cri.go:89] found id: ""
	I0815 01:31:57.718641   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.718653   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:31:57.718661   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:31:57.718738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:31:57.752763   66919 cri.go:89] found id: ""
	I0815 01:31:57.752781   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.752788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:31:57.752793   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:31:57.752840   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:31:57.785667   66919 cri.go:89] found id: ""
	I0815 01:31:57.785697   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.785709   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:31:57.785716   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:31:57.785776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:31:57.818775   66919 cri.go:89] found id: ""
	I0815 01:31:57.818804   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.818813   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:31:57.818821   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:31:57.818881   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:31:57.853766   66919 cri.go:89] found id: ""
	I0815 01:31:57.853798   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.853809   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:31:57.853815   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:31:57.853880   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:31:57.886354   66919 cri.go:89] found id: ""
	I0815 01:31:57.886379   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.886386   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:31:57.886392   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:31:57.886453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:31:57.920522   66919 cri.go:89] found id: ""
	I0815 01:31:57.920553   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.920576   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:31:57.920583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:31:57.920648   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:31:57.952487   66919 cri.go:89] found id: ""
	I0815 01:31:57.952511   66919 logs.go:276] 0 containers: []
	W0815 01:31:57.952520   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:31:57.952528   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:31:57.952541   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:31:58.003026   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:31:58.003064   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:31:58.016516   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:31:58.016544   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:31:58.091434   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:31:58.091459   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:31:58.091500   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:31:58.170038   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:31:58.170073   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:31:55.760430   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.760719   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.761206   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:57.313073   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:31:59.812268   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.252033   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.252260   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:00.709797   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:00.724086   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:00.724162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:00.756025   66919 cri.go:89] found id: ""
	I0815 01:32:00.756056   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.756066   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:00.756073   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:00.756130   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:00.787831   66919 cri.go:89] found id: ""
	I0815 01:32:00.787858   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.787870   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:00.787880   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:00.787940   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:00.821605   66919 cri.go:89] found id: ""
	I0815 01:32:00.821637   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.821644   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:00.821649   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:00.821697   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:00.852708   66919 cri.go:89] found id: ""
	I0815 01:32:00.852732   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.852739   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:00.852745   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:00.852790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:00.885392   66919 cri.go:89] found id: ""
	I0815 01:32:00.885426   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.885437   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:00.885446   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:00.885506   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:00.916715   66919 cri.go:89] found id: ""
	I0815 01:32:00.916751   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.916763   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:00.916771   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:00.916890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:00.949028   66919 cri.go:89] found id: ""
	I0815 01:32:00.949058   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.949069   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:00.949076   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:00.949137   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:00.986364   66919 cri.go:89] found id: ""
	I0815 01:32:00.986399   66919 logs.go:276] 0 containers: []
	W0815 01:32:00.986409   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:00.986419   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:00.986433   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.036475   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:01.036517   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:01.049711   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:01.049746   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:01.117283   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:01.117310   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:01.117328   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:01.195453   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:01.195492   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:03.732372   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:03.745944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:03.746005   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:03.780527   66919 cri.go:89] found id: ""
	I0815 01:32:03.780566   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.780578   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:03.780586   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:03.780647   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:03.814147   66919 cri.go:89] found id: ""
	I0815 01:32:03.814170   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.814177   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:03.814184   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:03.814267   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:03.847375   66919 cri.go:89] found id: ""
	I0815 01:32:03.847409   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.847422   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:03.847429   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:03.847497   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:03.882859   66919 cri.go:89] found id: ""
	I0815 01:32:03.882887   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.882897   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:03.882904   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:03.882972   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:03.916490   66919 cri.go:89] found id: ""
	I0815 01:32:03.916520   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.916528   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:03.916544   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:03.916613   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:03.954789   66919 cri.go:89] found id: ""
	I0815 01:32:03.954819   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.954836   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:03.954844   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:03.954907   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:03.987723   66919 cri.go:89] found id: ""
	I0815 01:32:03.987748   66919 logs.go:276] 0 containers: []
	W0815 01:32:03.987756   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:03.987761   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:03.987810   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:04.020948   66919 cri.go:89] found id: ""
	I0815 01:32:04.020974   66919 logs.go:276] 0 containers: []
	W0815 01:32:04.020981   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:04.020990   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:04.021008   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:04.033466   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:04.033489   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:04.097962   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:04.097989   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:04.098006   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:04.174672   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:04.174706   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:04.216198   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:04.216228   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:01.761354   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:03.762268   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:02.313003   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:04.812280   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.751582   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.752387   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.768102   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:06.782370   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:06.782473   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:06.815958   66919 cri.go:89] found id: ""
	I0815 01:32:06.815983   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.815992   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:06.815999   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:06.816059   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:06.848701   66919 cri.go:89] found id: ""
	I0815 01:32:06.848735   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.848748   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:06.848756   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:06.848821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:06.879506   66919 cri.go:89] found id: ""
	I0815 01:32:06.879536   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.879544   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:06.879550   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:06.879607   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:06.915332   66919 cri.go:89] found id: ""
	I0815 01:32:06.915359   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.915371   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:06.915377   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:06.915438   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:06.949424   66919 cri.go:89] found id: ""
	I0815 01:32:06.949454   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.949464   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:06.949471   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:06.949518   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:06.983713   66919 cri.go:89] found id: ""
	I0815 01:32:06.983739   66919 logs.go:276] 0 containers: []
	W0815 01:32:06.983747   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:06.983753   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:06.983816   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:07.016165   66919 cri.go:89] found id: ""
	I0815 01:32:07.016196   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.016207   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:07.016214   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:07.016271   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:07.048368   66919 cri.go:89] found id: ""
	I0815 01:32:07.048399   66919 logs.go:276] 0 containers: []
	W0815 01:32:07.048410   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:07.048420   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:07.048435   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:07.100088   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:07.100128   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:07.113430   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:07.113459   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:07.178199   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:07.178223   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:07.178239   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:07.265089   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:07.265121   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:09.804733   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:09.819456   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:09.819530   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:09.850946   66919 cri.go:89] found id: ""
	I0815 01:32:09.850974   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.850981   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:09.850986   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:09.851043   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:09.888997   66919 cri.go:89] found id: ""
	I0815 01:32:09.889028   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.889039   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:09.889045   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:09.889105   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:09.921455   66919 cri.go:89] found id: ""
	I0815 01:32:09.921490   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.921503   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:09.921511   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:09.921587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:09.957365   66919 cri.go:89] found id: ""
	I0815 01:32:09.957394   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.957410   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:09.957417   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:09.957477   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:09.988716   66919 cri.go:89] found id: ""
	I0815 01:32:09.988740   66919 logs.go:276] 0 containers: []
	W0815 01:32:09.988753   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:09.988760   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:09.988823   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:10.024121   66919 cri.go:89] found id: ""
	I0815 01:32:10.024148   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.024155   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:10.024160   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:10.024208   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:10.056210   66919 cri.go:89] found id: ""
	I0815 01:32:10.056237   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.056247   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:10.056253   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:10.056314   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:10.087519   66919 cri.go:89] found id: ""
	I0815 01:32:10.087551   66919 logs.go:276] 0 containers: []
	W0815 01:32:10.087562   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:10.087574   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:10.087589   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:06.260821   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:08.760889   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:06.813185   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:09.312608   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.251168   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.252911   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:10.142406   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:10.142446   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:10.156134   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:10.156176   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:10.230397   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:10.230419   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:10.230432   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:10.315187   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:10.315221   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:12.852055   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:12.864410   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:12.864479   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:12.895777   66919 cri.go:89] found id: ""
	I0815 01:32:12.895811   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.895821   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:12.895831   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:12.895902   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:12.928135   66919 cri.go:89] found id: ""
	I0815 01:32:12.928161   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.928171   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:12.928178   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:12.928244   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:12.961837   66919 cri.go:89] found id: ""
	I0815 01:32:12.961867   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.961878   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:12.961885   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:12.961947   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:12.997899   66919 cri.go:89] found id: ""
	I0815 01:32:12.997928   66919 logs.go:276] 0 containers: []
	W0815 01:32:12.997939   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:12.997946   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:12.998008   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:13.032686   66919 cri.go:89] found id: ""
	I0815 01:32:13.032716   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.032725   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:13.032730   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:13.032783   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:13.064395   66919 cri.go:89] found id: ""
	I0815 01:32:13.064431   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.064444   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:13.064452   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:13.064522   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:13.103618   66919 cri.go:89] found id: ""
	I0815 01:32:13.103646   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.103655   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:13.103661   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:13.103711   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:13.137650   66919 cri.go:89] found id: ""
	I0815 01:32:13.137684   66919 logs.go:276] 0 containers: []
	W0815 01:32:13.137694   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:13.137702   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:13.137715   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:13.189803   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:13.189836   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:13.204059   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:13.204091   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:13.273702   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:13.273723   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:13.273735   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:13.358979   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:13.359037   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:11.260422   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.260760   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:11.812182   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:13.812777   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.752291   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.752500   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:15.899388   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:15.911944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:15.912013   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:15.946179   66919 cri.go:89] found id: ""
	I0815 01:32:15.946206   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.946215   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:15.946223   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:15.946284   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:15.979700   66919 cri.go:89] found id: ""
	I0815 01:32:15.979725   66919 logs.go:276] 0 containers: []
	W0815 01:32:15.979732   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:15.979738   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:15.979784   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:16.013003   66919 cri.go:89] found id: ""
	I0815 01:32:16.013033   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.013044   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:16.013056   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:16.013113   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:16.044824   66919 cri.go:89] found id: ""
	I0815 01:32:16.044851   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.044861   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:16.044868   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:16.044930   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:16.076193   66919 cri.go:89] found id: ""
	I0815 01:32:16.076219   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.076227   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:16.076232   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:16.076280   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:16.113747   66919 cri.go:89] found id: ""
	I0815 01:32:16.113775   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.113785   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:16.113795   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:16.113855   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:16.145504   66919 cri.go:89] found id: ""
	I0815 01:32:16.145547   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.145560   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:16.145568   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:16.145637   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:16.181581   66919 cri.go:89] found id: ""
	I0815 01:32:16.181613   66919 logs.go:276] 0 containers: []
	W0815 01:32:16.181623   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:16.181634   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:16.181655   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:16.223644   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:16.223687   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:16.279096   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:16.279131   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:16.292132   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:16.292161   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:16.360605   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:16.360624   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:16.360636   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:18.938884   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:18.951884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:18.951966   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:18.989163   66919 cri.go:89] found id: ""
	I0815 01:32:18.989192   66919 logs.go:276] 0 containers: []
	W0815 01:32:18.989201   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:18.989206   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:18.989256   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:19.025915   66919 cri.go:89] found id: ""
	I0815 01:32:19.025943   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.025952   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:19.025960   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:19.026028   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:19.062863   66919 cri.go:89] found id: ""
	I0815 01:32:19.062889   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.062899   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:19.062907   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:19.062969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:19.099336   66919 cri.go:89] found id: ""
	I0815 01:32:19.099358   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.099369   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:19.099383   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:19.099442   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:19.130944   66919 cri.go:89] found id: ""
	I0815 01:32:19.130977   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.130988   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:19.130995   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:19.131056   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:19.161353   66919 cri.go:89] found id: ""
	I0815 01:32:19.161381   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.161391   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:19.161398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:19.161454   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:19.195867   66919 cri.go:89] found id: ""
	I0815 01:32:19.195902   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.195915   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:19.195923   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:19.195993   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:19.228851   66919 cri.go:89] found id: ""
	I0815 01:32:19.228886   66919 logs.go:276] 0 containers: []
	W0815 01:32:19.228899   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:19.228919   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:19.228938   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:19.281284   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:19.281320   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:19.294742   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:19.294771   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:19.364684   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:19.364708   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:19.364722   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:19.451057   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:19.451092   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:15.261508   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:17.261956   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:19.760608   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:16.312855   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:18.811382   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.251898   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:22.252179   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.252312   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:21.989302   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:22.002691   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:22.002755   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:22.037079   66919 cri.go:89] found id: ""
	I0815 01:32:22.037101   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.037109   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:22.037115   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:22.037162   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:22.069804   66919 cri.go:89] found id: ""
	I0815 01:32:22.069833   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.069842   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:22.069848   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:22.069919   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.102474   66919 cri.go:89] found id: ""
	I0815 01:32:22.102503   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.102515   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:22.102523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:22.102587   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:22.137416   66919 cri.go:89] found id: ""
	I0815 01:32:22.137442   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.137449   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:22.137454   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:22.137511   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:22.171153   66919 cri.go:89] found id: ""
	I0815 01:32:22.171182   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.171191   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:22.171198   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:22.171259   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:22.207991   66919 cri.go:89] found id: ""
	I0815 01:32:22.208020   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.208029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:22.208038   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:22.208111   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:22.245727   66919 cri.go:89] found id: ""
	I0815 01:32:22.245757   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.245767   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:22.245774   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:22.245838   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:22.284478   66919 cri.go:89] found id: ""
	I0815 01:32:22.284502   66919 logs.go:276] 0 containers: []
	W0815 01:32:22.284510   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:22.284518   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:22.284529   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:22.297334   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:22.297378   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:22.369318   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:22.369342   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:22.369356   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:22.445189   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:22.445226   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:22.486563   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:22.486592   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.037875   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:25.051503   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:25.051580   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:25.090579   66919 cri.go:89] found id: ""
	I0815 01:32:25.090610   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.090622   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:25.090629   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:25.090691   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:25.123683   66919 cri.go:89] found id: ""
	I0815 01:32:25.123711   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.123722   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:25.123729   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:25.123790   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:22.261478   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:24.760607   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:20.812971   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:23.311523   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.313928   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:26.752024   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.252947   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:25.155715   66919 cri.go:89] found id: ""
	I0815 01:32:25.155744   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.155752   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:25.155757   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:25.155806   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:25.186654   66919 cri.go:89] found id: ""
	I0815 01:32:25.186680   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.186688   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:25.186694   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:25.186741   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:25.218636   66919 cri.go:89] found id: ""
	I0815 01:32:25.218665   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.218674   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:25.218679   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:25.218729   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:25.250018   66919 cri.go:89] found id: ""
	I0815 01:32:25.250046   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.250116   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:25.250147   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:25.250219   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:25.283374   66919 cri.go:89] found id: ""
	I0815 01:32:25.283403   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.283413   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:25.283420   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:25.283483   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:25.315240   66919 cri.go:89] found id: ""
	I0815 01:32:25.315260   66919 logs.go:276] 0 containers: []
	W0815 01:32:25.315267   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:25.315274   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:25.315286   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:25.367212   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:25.367243   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:25.380506   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:25.380531   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:25.441106   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:25.441129   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:25.441145   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:25.522791   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:25.522828   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.061984   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:28.075091   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:28.075149   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:28.110375   66919 cri.go:89] found id: ""
	I0815 01:32:28.110407   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.110419   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:28.110426   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:28.110490   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:28.146220   66919 cri.go:89] found id: ""
	I0815 01:32:28.146249   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.146258   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:28.146264   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:28.146317   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:28.177659   66919 cri.go:89] found id: ""
	I0815 01:32:28.177691   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.177702   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:28.177708   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:28.177776   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:28.209729   66919 cri.go:89] found id: ""
	I0815 01:32:28.209759   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.209768   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:28.209775   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:28.209835   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:28.241605   66919 cri.go:89] found id: ""
	I0815 01:32:28.241633   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.241642   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:28.241646   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:28.241706   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:28.276697   66919 cri.go:89] found id: ""
	I0815 01:32:28.276722   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.276730   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:28.276735   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:28.276785   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:28.309109   66919 cri.go:89] found id: ""
	I0815 01:32:28.309134   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.309144   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:28.309151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:28.309213   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:28.348262   66919 cri.go:89] found id: ""
	I0815 01:32:28.348289   66919 logs.go:276] 0 containers: []
	W0815 01:32:28.348303   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:28.348315   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:28.348329   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:28.387270   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:28.387296   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:28.440454   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:28.440504   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:28.453203   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:28.453233   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:28.523080   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:28.523106   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:28.523123   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:26.761742   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.261323   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:27.812457   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:29.812954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.253078   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.755301   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:31.098144   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:31.111396   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:31.111469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:31.143940   66919 cri.go:89] found id: ""
	I0815 01:32:31.143969   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.143977   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:31.143983   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:31.144038   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:31.175393   66919 cri.go:89] found id: ""
	I0815 01:32:31.175421   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.175439   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:31.175447   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:31.175509   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:31.213955   66919 cri.go:89] found id: ""
	I0815 01:32:31.213984   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.213993   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:31.213998   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:31.214047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:31.245836   66919 cri.go:89] found id: ""
	I0815 01:32:31.245861   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.245868   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:31.245873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:31.245936   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:31.279290   66919 cri.go:89] found id: ""
	I0815 01:32:31.279317   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.279327   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:31.279334   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:31.279408   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:31.313898   66919 cri.go:89] found id: ""
	I0815 01:32:31.313926   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.313937   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:31.313944   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:31.314020   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:31.344466   66919 cri.go:89] found id: ""
	I0815 01:32:31.344502   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.344513   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:31.344521   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:31.344586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:31.375680   66919 cri.go:89] found id: ""
	I0815 01:32:31.375709   66919 logs.go:276] 0 containers: []
	W0815 01:32:31.375721   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:31.375732   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:31.375747   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:31.457005   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:31.457048   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.494656   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:31.494691   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:31.546059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:31.546096   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:31.559523   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:31.559553   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:31.628402   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
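	The cycle that repeats above is the log-gathering loop visible in these lines: probe for a kube-apiserver process, list CRI containers for each expected component, and, when nothing is found, collect kubelet, dmesg, describe-nodes and CRI-O output. As a readability aid only, here is a minimal shell sketch of one such cycle; the individual commands are copied from the ssh_runner lines above, while the loop wrapper is illustrative and not part of the report:

	    # probe for an apiserver process on the node (pgrep line from the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # list CRI containers for each expected component (one crictl call per name in the log)
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # gather supporting logs when no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	In every iteration the describe-nodes step fails with "The connection to the server localhost:8443 was refused", i.e. the apiserver never comes up, which is why the same cycle keeps repeating below.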
	I0815 01:32:34.128980   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:34.142151   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:34.142216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:34.189425   66919 cri.go:89] found id: ""
	I0815 01:32:34.189453   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.189464   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:34.189470   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:34.189533   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:34.222360   66919 cri.go:89] found id: ""
	I0815 01:32:34.222385   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.222392   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:34.222398   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:34.222453   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:34.256275   66919 cri.go:89] found id: ""
	I0815 01:32:34.256302   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.256314   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:34.256322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:34.256387   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:34.294104   66919 cri.go:89] found id: ""
	I0815 01:32:34.294130   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.294137   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:34.294143   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:34.294214   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:34.330163   66919 cri.go:89] found id: ""
	I0815 01:32:34.330193   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.330205   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:34.330213   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:34.330278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:34.363436   66919 cri.go:89] found id: ""
	I0815 01:32:34.363464   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.363475   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:34.363483   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:34.363540   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:34.399733   66919 cri.go:89] found id: ""
	I0815 01:32:34.399761   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.399772   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:34.399779   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:34.399832   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:34.433574   66919 cri.go:89] found id: ""
	I0815 01:32:34.433781   66919 logs.go:276] 0 containers: []
	W0815 01:32:34.433804   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:34.433820   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:34.433839   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:34.488449   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:34.488496   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:34.502743   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:34.502776   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:34.565666   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:34.565701   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:34.565718   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:34.639463   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:34.639498   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:31.262299   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:33.760758   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:32.313372   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:34.812259   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.251156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.252330   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:37.189617   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:37.202695   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:37.202766   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:37.235556   66919 cri.go:89] found id: ""
	I0815 01:32:37.235589   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.235600   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:37.235608   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:37.235669   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:37.271110   66919 cri.go:89] found id: ""
	I0815 01:32:37.271139   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.271150   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:37.271158   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:37.271216   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:37.304294   66919 cri.go:89] found id: ""
	I0815 01:32:37.304325   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.304332   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:37.304337   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:37.304398   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:37.337271   66919 cri.go:89] found id: ""
	I0815 01:32:37.337297   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.337309   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:37.337317   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:37.337377   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:37.373088   66919 cri.go:89] found id: ""
	I0815 01:32:37.373115   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.373126   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:37.373133   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:37.373184   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:37.407978   66919 cri.go:89] found id: ""
	I0815 01:32:37.408003   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.408011   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:37.408016   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:37.408065   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:37.441966   66919 cri.go:89] found id: ""
	I0815 01:32:37.441999   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.442009   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:37.442017   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:37.442079   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:37.473670   66919 cri.go:89] found id: ""
	I0815 01:32:37.473699   66919 logs.go:276] 0 containers: []
	W0815 01:32:37.473710   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:37.473720   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:37.473740   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:37.509174   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:37.509208   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:37.560059   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:37.560099   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:37.574425   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:37.574458   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:37.639177   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:37.639199   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:37.639216   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:36.260796   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:38.261082   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:36.813759   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:39.312862   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.752526   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.251946   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:40.218504   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:40.231523   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:40.231626   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:40.266065   66919 cri.go:89] found id: ""
	I0815 01:32:40.266092   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.266102   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:40.266109   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:40.266174   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:40.298717   66919 cri.go:89] found id: ""
	I0815 01:32:40.298749   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.298759   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:40.298767   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:40.298821   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:40.330633   66919 cri.go:89] found id: ""
	I0815 01:32:40.330660   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.330668   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:40.330674   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:40.330738   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:40.367840   66919 cri.go:89] found id: ""
	I0815 01:32:40.367866   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.367876   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:40.367884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:40.367953   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:40.403883   66919 cri.go:89] found id: ""
	I0815 01:32:40.403910   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.403921   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:40.403927   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:40.404001   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:40.433989   66919 cri.go:89] found id: ""
	I0815 01:32:40.434016   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.434029   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:40.434036   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:40.434098   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:40.468173   66919 cri.go:89] found id: ""
	I0815 01:32:40.468202   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.468213   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:40.468220   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:40.468278   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:40.502701   66919 cri.go:89] found id: ""
	I0815 01:32:40.502726   66919 logs.go:276] 0 containers: []
	W0815 01:32:40.502737   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:40.502748   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:40.502772   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:40.582716   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:40.582751   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:40.582766   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:40.663875   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:40.663910   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.710394   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:40.710439   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:40.763015   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:40.763044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.276542   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:43.289311   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:43.289375   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:43.334368   66919 cri.go:89] found id: ""
	I0815 01:32:43.334398   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.334408   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:43.334416   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:43.334480   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:43.367778   66919 cri.go:89] found id: ""
	I0815 01:32:43.367810   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.367821   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:43.367829   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:43.367890   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:43.408036   66919 cri.go:89] found id: ""
	I0815 01:32:43.408060   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.408067   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:43.408072   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:43.408126   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:43.442240   66919 cri.go:89] found id: ""
	I0815 01:32:43.442264   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.442276   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:43.442282   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:43.442366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:43.475071   66919 cri.go:89] found id: ""
	I0815 01:32:43.475103   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.475113   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:43.475123   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:43.475189   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:43.508497   66919 cri.go:89] found id: ""
	I0815 01:32:43.508526   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.508536   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:43.508543   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:43.508601   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:43.544292   66919 cri.go:89] found id: ""
	I0815 01:32:43.544315   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.544322   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:43.544328   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:43.544390   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:43.582516   66919 cri.go:89] found id: ""
	I0815 01:32:43.582544   66919 logs.go:276] 0 containers: []
	W0815 01:32:43.582556   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:43.582567   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:43.582583   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:43.633821   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:43.633853   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:43.647453   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:43.647478   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:43.715818   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:43.715839   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:43.715850   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:43.798131   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:43.798167   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:40.262028   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:42.262223   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:44.760964   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:41.813262   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:43.813491   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:45.751794   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:47.751852   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:49.752186   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.337867   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:46.364553   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:46.364629   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:46.426611   66919 cri.go:89] found id: ""
	I0815 01:32:46.426642   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.426654   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:46.426662   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:46.426724   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:46.461160   66919 cri.go:89] found id: ""
	I0815 01:32:46.461194   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.461201   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:46.461206   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:46.461262   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:46.492542   66919 cri.go:89] found id: ""
	I0815 01:32:46.492566   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.492576   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:46.492583   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:46.492643   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:46.526035   66919 cri.go:89] found id: ""
	I0815 01:32:46.526060   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.526068   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:46.526075   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:46.526131   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:46.558867   66919 cri.go:89] found id: ""
	I0815 01:32:46.558895   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.558903   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:46.558909   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:46.558969   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:46.593215   66919 cri.go:89] found id: ""
	I0815 01:32:46.593243   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.593258   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:46.593264   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:46.593345   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:46.626683   66919 cri.go:89] found id: ""
	I0815 01:32:46.626710   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.626720   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:46.626727   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:46.626786   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:46.660687   66919 cri.go:89] found id: ""
	I0815 01:32:46.660716   66919 logs.go:276] 0 containers: []
	W0815 01:32:46.660727   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:46.660738   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:46.660754   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:46.710639   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:46.710670   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:46.723378   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:46.723402   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:46.790906   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:46.790931   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:46.790946   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:46.876843   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:46.876877   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:49.421563   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:49.434606   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:49.434688   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:49.468855   66919 cri.go:89] found id: ""
	I0815 01:32:49.468884   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.468895   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:49.468900   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:49.468958   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:49.507477   66919 cri.go:89] found id: ""
	I0815 01:32:49.507507   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.507519   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:49.507526   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:49.507586   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:49.539825   66919 cri.go:89] found id: ""
	I0815 01:32:49.539855   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.539866   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:49.539873   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:49.539925   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:49.570812   66919 cri.go:89] found id: ""
	I0815 01:32:49.570841   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.570851   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:49.570858   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:49.570910   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:49.604327   66919 cri.go:89] found id: ""
	I0815 01:32:49.604356   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.604367   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:49.604374   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:49.604456   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:49.640997   66919 cri.go:89] found id: ""
	I0815 01:32:49.641029   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.641042   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:49.641051   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:49.641116   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:49.673274   66919 cri.go:89] found id: ""
	I0815 01:32:49.673303   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.673314   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:49.673322   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:49.673381   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:49.708863   66919 cri.go:89] found id: ""
	I0815 01:32:49.708890   66919 logs.go:276] 0 containers: []
	W0815 01:32:49.708897   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:49.708905   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:49.708916   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:49.759404   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:49.759431   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:49.773401   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:49.773429   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:49.842512   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:49.842539   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:49.842557   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:49.923996   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:49.924030   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:46.760999   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.762058   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:46.312409   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:48.313081   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:51.752324   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.752358   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.459672   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:52.472149   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:52.472218   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:52.508168   66919 cri.go:89] found id: ""
	I0815 01:32:52.508193   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.508202   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:52.508207   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:52.508260   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:52.543741   66919 cri.go:89] found id: ""
	I0815 01:32:52.543770   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.543788   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:52.543796   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:52.543850   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:52.575833   66919 cri.go:89] found id: ""
	I0815 01:32:52.575865   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.575876   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:52.575883   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:52.575950   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:52.607593   66919 cri.go:89] found id: ""
	I0815 01:32:52.607627   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.607638   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:52.607645   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:52.607705   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:52.641726   66919 cri.go:89] found id: ""
	I0815 01:32:52.641748   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.641757   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:52.641763   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:52.641820   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:52.673891   66919 cri.go:89] found id: ""
	I0815 01:32:52.673918   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.673926   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:52.673932   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:52.673989   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:52.705405   66919 cri.go:89] found id: ""
	I0815 01:32:52.705465   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.705479   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:52.705488   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:52.705683   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:52.739413   66919 cri.go:89] found id: ""
	I0815 01:32:52.739442   66919 logs.go:276] 0 containers: []
	W0815 01:32:52.739455   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:52.739466   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:52.739481   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:52.791891   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:52.791926   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:52.806154   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:52.806184   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:52.871807   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:52.871833   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:52.871848   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:52.955257   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:52.955299   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:51.261339   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:53.760453   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:50.811954   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:52.814155   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.315451   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.753146   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:58.251418   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:55.498326   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:55.511596   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:32:55.511674   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:32:55.545372   66919 cri.go:89] found id: ""
	I0815 01:32:55.545397   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.545405   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:32:55.545410   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:32:55.545469   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:32:55.578661   66919 cri.go:89] found id: ""
	I0815 01:32:55.578687   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.578699   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:32:55.578706   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:32:55.578774   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:32:55.612071   66919 cri.go:89] found id: ""
	I0815 01:32:55.612096   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.612104   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:32:55.612109   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:32:55.612167   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:32:55.647842   66919 cri.go:89] found id: ""
	I0815 01:32:55.647870   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.647879   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:32:55.647884   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:32:55.647946   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:32:55.683145   66919 cri.go:89] found id: ""
	I0815 01:32:55.683171   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.683179   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:32:55.683185   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:32:55.683237   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:32:55.716485   66919 cri.go:89] found id: ""
	I0815 01:32:55.716513   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.716524   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:32:55.716529   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:32:55.716588   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:32:55.751649   66919 cri.go:89] found id: ""
	I0815 01:32:55.751673   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.751681   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:32:55.751689   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:32:55.751748   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:32:55.786292   66919 cri.go:89] found id: ""
	I0815 01:32:55.786322   66919 logs.go:276] 0 containers: []
	W0815 01:32:55.786333   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:32:55.786345   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:32:55.786362   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:32:55.837633   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:32:55.837680   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:32:55.851624   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:32:55.851697   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:32:55.920496   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:32:55.920518   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:32:55.920532   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:32:55.998663   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:32:55.998700   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:32:58.538202   66919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:32:58.550630   66919 kubeadm.go:597] duration metric: took 4m4.454171061s to restartPrimaryControlPlane
	W0815 01:32:58.550719   66919 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:32:58.550763   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:32:55.760913   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.761301   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:57.812542   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:32:59.812797   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:00.251492   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.751937   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.968200   66919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.417406165s)
	I0815 01:33:02.968273   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:02.984328   66919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:02.994147   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:03.003703   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:03.003745   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:03.003799   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:03.012560   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:03.012629   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:03.021480   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:03.030121   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:03.030185   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:03.039216   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.047790   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:03.047854   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:03.056508   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:03.065001   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:03.065059   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:03.073818   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:03.286102   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:33:00.260884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.261081   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.261431   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:02.312430   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:04.811970   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:05.252564   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:07.751944   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:09.752232   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.262039   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.760900   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:06.812188   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:08.812782   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.752403   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:14.251873   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.261490   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.760541   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:11.312341   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:13.313036   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:16.252242   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.252528   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.761353   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:18.261298   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:15.812234   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:17.812936   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.312284   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.752195   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:23.253836   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:20.262317   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.760573   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:24.760639   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:22.812596   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.313723   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:25.751279   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.751900   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.260523   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.261069   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:27.314902   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:29.812210   67000 pod_ready.go:102] pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:30.306422   67000 pod_ready.go:81] duration metric: took 4m0.000133706s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:30.306452   67000 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-sfnng" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:33:30.306487   67000 pod_ready.go:38] duration metric: took 4m9.54037853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:30.306516   67000 kubeadm.go:597] duration metric: took 4m18.620065579s to restartPrimaryControlPlane
	W0815 01:33:30.306585   67000 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:33:30.306616   67000 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:33:30.251274   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:32.251733   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:34.261342   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:31.261851   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:33.760731   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:36.752156   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:39.251042   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:35.761425   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:38.260168   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:41.252730   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:43.751914   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:40.260565   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:42.261544   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:44.263225   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:45.752581   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:48.251003   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:46.760884   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:49.259955   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:50.251655   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751031   67451 pod_ready.go:102] pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:52.751064   67451 pod_ready.go:81] duration metric: took 4m0.00559932s for pod "metrics-server-6867b74b74-gdpxh" in "kube-system" namespace to be "Ready" ...
	E0815 01:33:52.751076   67451 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 01:33:52.751088   67451 pod_ready.go:38] duration metric: took 4m2.403367614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:33:52.751108   67451 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:33:52.751143   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:52.751205   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:52.795646   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:52.795671   67451 cri.go:89] found id: ""
	I0815 01:33:52.795680   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:52.795738   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.800301   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:52.800378   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:52.832704   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:52.832723   67451 cri.go:89] found id: ""
	I0815 01:33:52.832731   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:52.832789   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.836586   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:52.836647   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:52.871782   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:52.871806   67451 cri.go:89] found id: ""
	I0815 01:33:52.871814   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:52.871865   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.875939   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:52.876003   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:52.911531   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:52.911559   67451 cri.go:89] found id: ""
	I0815 01:33:52.911568   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:52.911618   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.915944   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:52.916044   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:52.950344   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:52.950370   67451 cri.go:89] found id: ""
	I0815 01:33:52.950379   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:52.950429   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.954361   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:52.954423   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:52.988534   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:52.988560   67451 cri.go:89] found id: ""
	I0815 01:33:52.988568   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:52.988614   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:52.992310   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:52.992362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:53.024437   67451 cri.go:89] found id: ""
	I0815 01:33:53.024464   67451 logs.go:276] 0 containers: []
	W0815 01:33:53.024472   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:53.024477   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:53.024540   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:53.065265   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.065294   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.065300   67451 cri.go:89] found id: ""
	I0815 01:33:53.065309   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:53.065371   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.069355   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:53.073218   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:53.073241   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:53.111718   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:53.111748   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:53.168887   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:53.168916   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:53.205011   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:53.205047   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:53.236754   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:53.236783   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:53.717444   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:53.717479   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:53.730786   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:53.730822   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:53.772883   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:53.772915   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:53.811011   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:53.811045   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:53.850482   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:53.850537   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:53.884061   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:53.884094   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:53.953586   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:53.953621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:54.074305   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:54.074345   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:51.261543   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:53.761698   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:56.568636   67000 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261991635s)
	I0815 01:33:56.568725   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:33:56.585102   67000 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:33:56.595265   67000 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:33:56.606275   67000 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:33:56.606302   67000 kubeadm.go:157] found existing configuration files:
	
	I0815 01:33:56.606346   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:33:56.614847   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:33:56.614909   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:33:56.624087   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:33:56.635940   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:33:56.635996   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:33:56.648778   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.659984   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:33:56.660048   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:33:56.670561   67000 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:33:56.680716   67000 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:33:56.680770   67000 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:33:56.691582   67000 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:33:56.744053   67000 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:33:56.744448   67000 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:33:56.859803   67000 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:33:56.859986   67000 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:33:56.860126   67000 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:33:56.870201   67000 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:33:56.872775   67000 out.go:204]   - Generating certificates and keys ...
	I0815 01:33:56.872875   67000 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:33:56.872957   67000 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:33:56.873055   67000 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:33:56.873134   67000 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:33:56.873222   67000 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:33:56.873302   67000 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:33:56.873391   67000 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:33:56.873474   67000 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:33:56.873577   67000 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:33:56.873686   67000 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:33:56.873745   67000 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:33:56.873823   67000 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:33:56.993607   67000 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:33:57.204419   67000 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:33:57.427518   67000 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:33:57.816802   67000 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:33:57.976885   67000 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:33:57.977545   67000 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:33:57.980898   67000 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:33:56.622543   67451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:33:56.645990   67451 api_server.go:72] duration metric: took 4m13.53998694s to wait for apiserver process to appear ...
	I0815 01:33:56.646016   67451 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:33:56.646059   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:33:56.646118   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:33:56.690122   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:56.690169   67451 cri.go:89] found id: ""
	I0815 01:33:56.690180   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:33:56.690253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.694647   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:33:56.694702   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:33:56.732231   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:56.732269   67451 cri.go:89] found id: ""
	I0815 01:33:56.732279   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:33:56.732341   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.736567   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:33:56.736642   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:33:56.776792   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:56.776816   67451 cri.go:89] found id: ""
	I0815 01:33:56.776827   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:33:56.776886   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.781131   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:33:56.781200   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:33:56.814488   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:56.814514   67451 cri.go:89] found id: ""
	I0815 01:33:56.814524   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:33:56.814598   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.818456   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:33:56.818518   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:33:56.872968   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:56.872988   67451 cri.go:89] found id: ""
	I0815 01:33:56.872998   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:33:56.873059   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.877393   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:33:56.877459   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:33:56.918072   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.918169   67451 cri.go:89] found id: ""
	I0815 01:33:56.918185   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:33:56.918247   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.923442   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:33:56.923508   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:33:56.960237   67451 cri.go:89] found id: ""
	I0815 01:33:56.960263   67451 logs.go:276] 0 containers: []
	W0815 01:33:56.960271   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:33:56.960276   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:33:56.960339   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:33:56.995156   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:56.995184   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:56.995189   67451 cri.go:89] found id: ""
	I0815 01:33:56.995195   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:33:56.995253   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:56.999496   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:33:57.004450   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:33:57.004478   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:33:57.082294   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:33:57.082336   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:33:57.098629   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:33:57.098662   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:33:57.132282   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:33:57.132314   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:33:57.166448   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:33:57.166482   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:33:57.198997   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:33:57.199027   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:33:57.232713   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:33:57.232746   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:33:57.684565   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:33:57.684601   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:33:57.736700   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:33:57.736734   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:33:57.847294   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:33:57.847320   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:33:57.896696   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:33:57.896725   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:33:57.940766   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:33:57.940799   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:33:57.979561   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:33:57.979586   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:33:56.260814   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:58.760911   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:33:57.982527   67000 out.go:204]   - Booting up control plane ...
	I0815 01:33:57.982632   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:33:57.982740   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:33:57.982828   67000 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:33:58.009596   67000 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:33:58.019089   67000 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:33:58.019165   67000 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:33:58.152279   67000 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:33:58.152459   67000 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:33:58.652446   67000 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.333422ms
	I0815 01:33:58.652548   67000 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:03.655057   67000 kubeadm.go:310] [api-check] The API server is healthy after 5.002436765s
	I0815 01:34:03.667810   67000 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:03.684859   67000 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:03.711213   67000 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:03.711523   67000 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-190398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:03.722147   67000 kubeadm.go:310] [bootstrap-token] Using token: rpl4uv.hjs6pd4939cxws48
	I0815 01:34:00.548574   67451 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8444/healthz ...
	I0815 01:34:00.554825   67451 api_server.go:279] https://192.168.39.223:8444/healthz returned 200:
	ok
	I0815 01:34:00.556191   67451 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:00.556215   67451 api_server.go:131] duration metric: took 3.910191173s to wait for apiserver health ...
	I0815 01:34:00.556225   67451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:00.556253   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:34:00.556316   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:34:00.603377   67451 cri.go:89] found id: "9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:00.603404   67451 cri.go:89] found id: ""
	I0815 01:34:00.603413   67451 logs.go:276] 1 containers: [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771]
	I0815 01:34:00.603471   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.608674   67451 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:34:00.608747   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:34:00.660318   67451 cri.go:89] found id: "e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:00.660346   67451 cri.go:89] found id: ""
	I0815 01:34:00.660355   67451 logs.go:276] 1 containers: [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872]
	I0815 01:34:00.660450   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.664411   67451 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:34:00.664483   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:34:00.710148   67451 cri.go:89] found id: "6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:00.710178   67451 cri.go:89] found id: ""
	I0815 01:34:00.710188   67451 logs.go:276] 1 containers: [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b]
	I0815 01:34:00.710255   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.714877   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:34:00.714936   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:34:00.750324   67451 cri.go:89] found id: "a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:00.750352   67451 cri.go:89] found id: ""
	I0815 01:34:00.750361   67451 logs.go:276] 1 containers: [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0]
	I0815 01:34:00.750423   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.754304   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:34:00.754377   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:34:00.797956   67451 cri.go:89] found id: "451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.797980   67451 cri.go:89] found id: ""
	I0815 01:34:00.797989   67451 logs.go:276] 1 containers: [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6]
	I0815 01:34:00.798053   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.802260   67451 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:34:00.802362   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:34:00.841502   67451 cri.go:89] found id: "2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:00.841529   67451 cri.go:89] found id: ""
	I0815 01:34:00.841539   67451 logs.go:276] 1 containers: [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049]
	I0815 01:34:00.841599   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.845398   67451 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:34:00.845454   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:34:00.882732   67451 cri.go:89] found id: ""
	I0815 01:34:00.882769   67451 logs.go:276] 0 containers: []
	W0815 01:34:00.882779   67451 logs.go:278] No container was found matching "kindnet"
	I0815 01:34:00.882786   67451 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 01:34:00.882855   67451 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 01:34:00.924913   67451 cri.go:89] found id: "f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:00.924942   67451 cri.go:89] found id: "51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:00.924948   67451 cri.go:89] found id: ""
	I0815 01:34:00.924958   67451 logs.go:276] 2 containers: [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f]
	I0815 01:34:00.925019   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.929047   67451 ssh_runner.go:195] Run: which crictl
	I0815 01:34:00.932838   67451 logs.go:123] Gathering logs for kube-proxy [451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6] ...
	I0815 01:34:00.932862   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 451245c6ce8782b9e0c01c040c32fd70bcd5fb67a960cbd48b250c3090f53bb6"
	I0815 01:34:00.975515   67451 logs.go:123] Gathering logs for kube-controller-manager [2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049] ...
	I0815 01:34:00.975544   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9821e596c0d71e448bf0a018fa16bd45b662dd6d1e8c9d979bf62b6047b049"
	I0815 01:34:01.041578   67451 logs.go:123] Gathering logs for storage-provisioner [f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24] ...
	I0815 01:34:01.041616   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e16ea21684bc21fed22bfbd20c2bc0fe66d9c0aea5482963fa3fbc1497bf24"
	I0815 01:34:01.083548   67451 logs.go:123] Gathering logs for kubelet ...
	I0815 01:34:01.083584   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:34:01.181982   67451 logs.go:123] Gathering logs for dmesg ...
	I0815 01:34:01.182028   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 01:34:01.197180   67451 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:34:01.197222   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 01:34:01.296173   67451 logs.go:123] Gathering logs for kube-apiserver [9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771] ...
	I0815 01:34:01.296215   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aa794b86b77254ee6b8261178fdc6526c781cc9003aea537a9b818bd7a85771"
	I0815 01:34:01.348591   67451 logs.go:123] Gathering logs for coredns [6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b] ...
	I0815 01:34:01.348621   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6878af069904e954f4400acd80d6e108f38cc7a014f885e327c89f5b4969841b"
	I0815 01:34:01.385258   67451 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:34:01.385290   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:34:01.760172   67451 logs.go:123] Gathering logs for etcd [e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872] ...
	I0815 01:34:01.760228   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0cc07c948ffd57e0d9bf36c05998a0151fcbcb9e1253a7e690735e786b19872"
	I0815 01:34:01.811334   67451 logs.go:123] Gathering logs for kube-scheduler [a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0] ...
	I0815 01:34:01.811371   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a093f3ec7d6d1a1e8e91453b63e33c35e7bfc2cc9edcf0f7a942bfbe223d9ba0"
	I0815 01:34:01.855563   67451 logs.go:123] Gathering logs for storage-provisioner [51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f] ...
	I0815 01:34:01.855602   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d71abfa8f5cc83ef04090f9bc16d2c8f9bcc61d10749bb5b14f57c41073c5f"
	I0815 01:34:01.891834   67451 logs.go:123] Gathering logs for container status ...
	I0815 01:34:01.891871   67451 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:34:04.440542   67451 system_pods.go:59] 8 kube-system pods found
	I0815 01:34:04.440582   67451 system_pods.go:61] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.440590   67451 system_pods.go:61] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.440596   67451 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.440602   67451 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.440607   67451 system_pods.go:61] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.440612   67451 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.440622   67451 system_pods.go:61] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.440627   67451 system_pods.go:61] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.440636   67451 system_pods.go:74] duration metric: took 3.884405315s to wait for pod list to return data ...
	I0815 01:34:04.440643   67451 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:04.443705   67451 default_sa.go:45] found service account: "default"
	I0815 01:34:04.443728   67451 default_sa.go:55] duration metric: took 3.078997ms for default service account to be created ...
	I0815 01:34:04.443736   67451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:04.451338   67451 system_pods.go:86] 8 kube-system pods found
	I0815 01:34:04.451370   67451 system_pods.go:89] "coredns-6f6b679f8f-gxdqt" [2d8541f1-a07e-4d34-80ae-f7b2529b560b] Running
	I0815 01:34:04.451379   67451 system_pods.go:89] "etcd-default-k8s-diff-port-018537" [c6623ba4-6b48-4c68-a589-16f47114ddf6] Running
	I0815 01:34:04.451386   67451 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-018537" [3e22a604-e723-45ce-b334-9aad3941655c] Running
	I0815 01:34:04.451394   67451 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-018537" [fe5954cb-1850-4196-b7de-788ba64e9373] Running
	I0815 01:34:04.451401   67451 system_pods.go:89] "kube-proxy-s8mfb" [6897db99-a461-4261-a7b4-17f13c72a724] Running
	I0815 01:34:04.451408   67451 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-018537" [9d0387a7-8438-4170-98a0-af3dbf2ed8cc] Running
	I0815 01:34:04.451419   67451 system_pods.go:89] "metrics-server-6867b74b74-gdpxh" [e263386d-fda4-4841-ace9-81a1ba4e8a81] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:04.451430   67451 system_pods.go:89] "storage-provisioner" [d5929cbb-30bf-4ce8-bd14-7e687e83492b] Running
	I0815 01:34:04.451443   67451 system_pods.go:126] duration metric: took 7.701241ms to wait for k8s-apps to be running ...
	I0815 01:34:04.451455   67451 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:04.451507   67451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:04.468766   67451 system_svc.go:56] duration metric: took 17.300221ms WaitForService to wait for kubelet
	I0815 01:34:04.468801   67451 kubeadm.go:582] duration metric: took 4m21.362801315s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:04.468832   67451 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:04.472507   67451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:04.472531   67451 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:04.472542   67451 node_conditions.go:105] duration metric: took 3.704147ms to run NodePressure ...
	I0815 01:34:04.472565   67451 start.go:241] waiting for startup goroutines ...
	I0815 01:34:04.472575   67451 start.go:246] waiting for cluster config update ...
	I0815 01:34:04.472588   67451 start.go:255] writing updated cluster config ...
	I0815 01:34:04.472865   67451 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:04.527726   67451 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:04.529173   67451 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-018537" cluster and "default" namespace by default
	I0815 01:34:03.723380   67000 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:03.723547   67000 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:03.729240   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:03.737279   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:03.740490   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:03.747717   67000 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:03.751107   67000 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:04.063063   67000 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:04.490218   67000 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:05.062068   67000 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:05.065926   67000 kubeadm.go:310] 
	I0815 01:34:05.065991   67000 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:05.066017   67000 kubeadm.go:310] 
	I0815 01:34:05.066103   67000 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:05.066114   67000 kubeadm.go:310] 
	I0815 01:34:05.066148   67000 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:05.066211   67000 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:05.066286   67000 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:05.066298   67000 kubeadm.go:310] 
	I0815 01:34:05.066368   67000 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:05.066377   67000 kubeadm.go:310] 
	I0815 01:34:05.066416   67000 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:05.066423   67000 kubeadm.go:310] 
	I0815 01:34:05.066499   67000 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:05.066602   67000 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:05.066692   67000 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:05.066699   67000 kubeadm.go:310] 
	I0815 01:34:05.066766   67000 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:05.066829   67000 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:05.066835   67000 kubeadm.go:310] 
	I0815 01:34:05.066958   67000 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067094   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:05.067122   67000 kubeadm.go:310] 	--control-plane 
	I0815 01:34:05.067130   67000 kubeadm.go:310] 
	I0815 01:34:05.067246   67000 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:05.067257   67000 kubeadm.go:310] 
	I0815 01:34:05.067360   67000 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rpl4uv.hjs6pd4939cxws48 \
	I0815 01:34:05.067496   67000 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:05.068747   67000 kubeadm.go:310] W0815 01:33:56.716635    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069045   67000 kubeadm.go:310] W0815 01:33:56.717863    2551 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:05.069191   67000 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:34:05.069220   67000 cni.go:84] Creating CNI manager for ""
	I0815 01:34:05.069231   67000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:05.070969   67000 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:00.761976   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:03.263360   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:05.072063   67000 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:05.081962   67000 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 01:34:05.106105   67000 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:05.106173   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.106224   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-190398 minikube.k8s.io/updated_at=2024_08_15T01_34_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=embed-certs-190398 minikube.k8s.io/primary=true
	I0815 01:34:05.282543   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:05.282564   67000 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:05.783320   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.282990   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:06.782692   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.283083   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:07.783174   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.283580   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:08.783293   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.282718   67000 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:09.384394   67000 kubeadm.go:1113] duration metric: took 4.278268585s to wait for elevateKubeSystemPrivileges
	I0815 01:34:09.384433   67000 kubeadm.go:394] duration metric: took 4m57.749730888s to StartCluster
	I0815 01:34:09.384454   67000 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.384550   67000 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:09.386694   67000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:09.386961   67000 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.151 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:09.387019   67000 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:09.387099   67000 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-190398"
	I0815 01:34:09.387109   67000 addons.go:69] Setting default-storageclass=true in profile "embed-certs-190398"
	I0815 01:34:09.387133   67000 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-190398"
	I0815 01:34:09.387144   67000 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-190398"
	W0815 01:34:09.387147   67000 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:09.387165   67000 addons.go:69] Setting metrics-server=true in profile "embed-certs-190398"
	I0815 01:34:09.387178   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387189   67000 config.go:182] Loaded profile config "embed-certs-190398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:09.387205   67000 addons.go:234] Setting addon metrics-server=true in "embed-certs-190398"
	W0815 01:34:09.387216   67000 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:09.387253   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.387571   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387601   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387577   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387681   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.387729   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.387799   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.388556   67000 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:09.389872   67000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:09.404358   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 01:34:09.404925   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405016   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0815 01:34:09.405505   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405526   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.405530   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.405878   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.405982   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.405993   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.406352   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.406418   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0815 01:34:09.406460   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406477   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.406755   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.406839   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.406876   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.407171   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.407189   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.407518   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.407712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.412572   67000 addons.go:234] Setting addon default-storageclass=true in "embed-certs-190398"
	W0815 01:34:09.412597   67000 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:09.412626   67000 host.go:66] Checking if "embed-certs-190398" exists ...
	I0815 01:34:09.413018   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.413049   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.427598   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0815 01:34:09.428087   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.428619   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.428645   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.429079   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.429290   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.430391   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0815 01:34:09.430978   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.431199   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.431477   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.431489   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.431839   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.431991   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.433073   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0815 01:34:09.433473   67000 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:09.433726   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.433849   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.434259   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.434433   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.434786   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.434987   67000 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.435005   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:09.435026   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.435675   67000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:09.435700   67000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:09.435887   67000 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:05.760130   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:07.760774   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.762245   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:09.437621   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:09.437643   67000 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:09.437664   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.438723   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439409   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.439431   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.439685   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.439898   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.440245   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.440419   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.440609   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441353   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.441380   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.441558   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.441712   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.441859   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.441957   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.455864   67000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0815 01:34:09.456238   67000 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:09.456858   67000 main.go:141] libmachine: Using API Version  1
	I0815 01:34:09.456878   67000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:09.457179   67000 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:09.457413   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetState
	I0815 01:34:09.459002   67000 main.go:141] libmachine: (embed-certs-190398) Calling .DriverName
	I0815 01:34:09.459268   67000 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.459282   67000 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:09.459296   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHHostname
	I0815 01:34:09.461784   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462170   67000 main.go:141] libmachine: (embed-certs-190398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:91:1a", ip: ""} in network mk-embed-certs-190398: {Iface:virbr4 ExpiryTime:2024-08-15 02:28:57 +0000 UTC Type:0 Mac:52:54:00:5a:91:1a Iaid: IPaddr:192.168.72.151 Prefix:24 Hostname:embed-certs-190398 Clientid:01:52:54:00:5a:91:1a}
	I0815 01:34:09.462203   67000 main.go:141] libmachine: (embed-certs-190398) DBG | domain embed-certs-190398 has defined IP address 192.168.72.151 and MAC address 52:54:00:5a:91:1a in network mk-embed-certs-190398
	I0815 01:34:09.462317   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHPort
	I0815 01:34:09.462491   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHKeyPath
	I0815 01:34:09.462631   67000 main.go:141] libmachine: (embed-certs-190398) Calling .GetSSHUsername
	I0815 01:34:09.462772   67000 sshutil.go:53] new ssh client: &{IP:192.168.72.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/embed-certs-190398/id_rsa Username:docker}
	I0815 01:34:09.602215   67000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:09.621687   67000 node_ready.go:35] waiting up to 6m0s for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635114   67000 node_ready.go:49] node "embed-certs-190398" has status "Ready":"True"
	I0815 01:34:09.635146   67000 node_ready.go:38] duration metric: took 13.422205ms for node "embed-certs-190398" to be "Ready" ...
	I0815 01:34:09.635169   67000 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:09.642293   67000 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:09.681219   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:09.681242   67000 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:09.725319   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:09.725353   67000 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:09.725445   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:09.758901   67000 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:09.758973   67000 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:09.809707   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:09.831765   67000 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:10.013580   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013607   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.013902   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:10.013933   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.013950   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.013968   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.013979   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.014212   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.014227   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023286   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:10.023325   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:10.023618   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:10.023643   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:10.023655   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.121834   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.312088989s)
	I0815 01:34:11.121883   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.121896   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122269   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.122304   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122324   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.122340   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.122354   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.122588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.122605   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183170   67000 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351356186s)
	I0815 01:34:11.183232   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183248   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183588   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.183604   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.183608   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.183619   67000 main.go:141] libmachine: Making call to close driver server
	I0815 01:34:11.183627   67000 main.go:141] libmachine: (embed-certs-190398) Calling .Close
	I0815 01:34:11.183989   67000 main.go:141] libmachine: (embed-certs-190398) DBG | Closing plugin on server side
	I0815 01:34:11.184017   67000 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:34:11.184031   67000 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:34:11.184053   67000 addons.go:475] Verifying addon metrics-server=true in "embed-certs-190398"
	I0815 01:34:11.186460   67000 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 01:34:12.261636   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.763849   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:11.187572   67000 addons.go:510] duration metric: took 1.800554463s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 01:34:11.653997   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.149672   67000 pod_ready.go:102] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:14.652753   67000 pod_ready.go:92] pod "etcd-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:14.652782   67000 pod_ready.go:81] duration metric: took 5.0104594s for pod "etcd-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:14.652794   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:16.662387   67000 pod_ready.go:102] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:17.158847   67000 pod_ready.go:92] pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.158877   67000 pod_ready.go:81] duration metric: took 2.50607523s for pod "kube-apiserver-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.158895   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163274   67000 pod_ready.go:92] pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.163295   67000 pod_ready.go:81] duration metric: took 4.392165ms for pod "kube-controller-manager-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.163307   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167416   67000 pod_ready.go:92] pod "kube-proxy-7hfvr" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.167436   67000 pod_ready.go:81] duration metric: took 4.122023ms for pod "kube-proxy-7hfvr" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.167447   67000 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171559   67000 pod_ready.go:92] pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace has status "Ready":"True"
	I0815 01:34:17.171578   67000 pod_ready.go:81] duration metric: took 4.12361ms for pod "kube-scheduler-embed-certs-190398" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:17.171587   67000 pod_ready.go:38] duration metric: took 7.536405023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:17.171605   67000 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:34:17.171665   67000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:34:17.187336   67000 api_server.go:72] duration metric: took 7.800338922s to wait for apiserver process to appear ...
	I0815 01:34:17.187359   67000 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:34:17.187379   67000 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0815 01:34:17.191804   67000 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0815 01:34:17.192705   67000 api_server.go:141] control plane version: v1.31.0
	I0815 01:34:17.192726   67000 api_server.go:131] duration metric: took 5.35969ms to wait for apiserver health ...
	I0815 01:34:17.192739   67000 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:34:17.197588   67000 system_pods.go:59] 9 kube-system pods found
	I0815 01:34:17.197618   67000 system_pods.go:61] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.197626   67000 system_pods.go:61] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.197632   67000 system_pods.go:61] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.197638   67000 system_pods.go:61] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.197644   67000 system_pods.go:61] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.197649   67000 system_pods.go:61] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.197655   67000 system_pods.go:61] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.197665   67000 system_pods.go:61] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.197676   67000 system_pods.go:61] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.197688   67000 system_pods.go:74] duration metric: took 4.940904ms to wait for pod list to return data ...
	I0815 01:34:17.197699   67000 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:34:17.200172   67000 default_sa.go:45] found service account: "default"
	I0815 01:34:17.200190   67000 default_sa.go:55] duration metric: took 2.484111ms for default service account to be created ...
	I0815 01:34:17.200198   67000 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:34:17.359981   67000 system_pods.go:86] 9 kube-system pods found
	I0815 01:34:17.360011   67000 system_pods.go:89] "coredns-6f6b679f8f-kmmdc" [455019d9-07b5-418e-8668-26272424e96c] Running
	I0815 01:34:17.360019   67000 system_pods.go:89] "coredns-6f6b679f8f-kx2xv" [81e26858-a527-4f0d-a7fd-e5c3f82b29bc] Running
	I0815 01:34:17.360025   67000 system_pods.go:89] "etcd-embed-certs-190398" [0767f386-4cff-4c02-9c5c-ec334dd15d3d] Running
	I0815 01:34:17.360030   67000 system_pods.go:89] "kube-apiserver-embed-certs-190398" [737db54b-50eb-4fea-93a0-7e95d645b77f] Running
	I0815 01:34:17.360036   67000 system_pods.go:89] "kube-controller-manager-embed-certs-190398" [4767eb26-47a6-4dfd-833a-a4e18a57cb7e] Running
	I0815 01:34:17.360042   67000 system_pods.go:89] "kube-proxy-7hfvr" [ac963f25-9c0b-4b39-8bce-f0a16a6ab7e0] Running
	I0815 01:34:17.360047   67000 system_pods.go:89] "kube-scheduler-embed-certs-190398" [0ffcf10e-304e-4837-bd6f-c3b78193b378] Running
	I0815 01:34:17.360058   67000 system_pods.go:89] "metrics-server-6867b74b74-4ldv7" [ea1c5492-373d-445c-a135-b91569186449] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:34:17.360065   67000 system_pods.go:89] "storage-provisioner" [002656ed-b542-442d-9409-6f0b5cf557dc] Running
	I0815 01:34:17.360078   67000 system_pods.go:126] duration metric: took 159.873802ms to wait for k8s-apps to be running ...
	I0815 01:34:17.360091   67000 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:34:17.360143   67000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:17.374912   67000 system_svc.go:56] duration metric: took 14.811351ms WaitForService to wait for kubelet
	I0815 01:34:17.374948   67000 kubeadm.go:582] duration metric: took 7.987952187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:34:17.374977   67000 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:34:17.557650   67000 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:34:17.557681   67000 node_conditions.go:123] node cpu capacity is 2
	I0815 01:34:17.557694   67000 node_conditions.go:105] duration metric: took 182.710819ms to run NodePressure ...
	I0815 01:34:17.557706   67000 start.go:241] waiting for startup goroutines ...
	I0815 01:34:17.557716   67000 start.go:246] waiting for cluster config update ...
	I0815 01:34:17.557728   67000 start.go:255] writing updated cluster config ...
	I0815 01:34:17.557999   67000 ssh_runner.go:195] Run: rm -f paused
	I0815 01:34:17.605428   67000 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:34:17.607344   67000 out.go:177] * Done! kubectl is now configured to use "embed-certs-190398" cluster and "default" namespace by default
	I0815 01:34:17.260406   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.260601   66492 pod_ready.go:102] pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace has status "Ready":"False"
	I0815 01:34:19.754935   66492 pod_ready.go:81] duration metric: took 4m0.000339545s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" ...
	E0815 01:34:19.754964   66492 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qnnqs" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 01:34:19.754984   66492 pod_ready.go:38] duration metric: took 4m6.506948914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:19.755018   66492 kubeadm.go:597] duration metric: took 4m13.922875877s to restartPrimaryControlPlane
	W0815 01:34:19.755082   66492 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 01:34:19.755112   66492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:34:45.859009   66492 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.103872856s)
	I0815 01:34:45.859088   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:34:45.875533   66492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 01:34:45.885287   66492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:34:45.897067   66492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:34:45.897087   66492 kubeadm.go:157] found existing configuration files:
	
	I0815 01:34:45.897137   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:34:45.907073   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:34:45.907145   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:34:45.916110   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:34:45.925269   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:34:45.925330   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:34:45.934177   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.942464   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:34:45.942524   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:34:45.951504   66492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:45.961107   66492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:34:45.961159   66492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
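The ls/grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint and is removed otherwise, so that the following kubeadm init can regenerate it. A minimal bash sketch of the same idea, using the paths and endpoint shown in the log:

    # Remove kubeconfigs that are missing or that point at a different control plane.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done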
	I0815 01:34:45.970505   66492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:34:46.018530   66492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 01:34:46.018721   66492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:46.125710   66492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:46.125846   66492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:46.125961   66492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:46.134089   66492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:46.135965   66492 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:46.136069   66492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:46.136157   66492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:46.136256   66492 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:46.136333   66492 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:46.136442   66492 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:46.136528   66492 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:46.136614   66492 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:46.136736   66492 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:46.136845   66492 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:46.136946   66492 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:46.137066   66492 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:46.137143   66492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:46.289372   66492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:46.547577   66492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 01:34:46.679039   66492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:47.039625   66492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:47.355987   66492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:47.356514   66492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:47.359155   66492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:47.360813   66492 out.go:204]   - Booting up control plane ...
	I0815 01:34:47.360924   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:47.361018   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:47.361140   66492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:47.386603   66492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:47.395339   66492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:47.395391   66492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:47.526381   66492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 01:34:47.526512   66492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 01:34:48.027552   66492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.152677ms
	I0815 01:34:48.027674   66492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 01:34:53.029526   66492 kubeadm.go:310] [api-check] The API server is healthy after 5.001814093s
	I0815 01:34:53.043123   66492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 01:34:53.061171   66492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 01:34:53.093418   66492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 01:34:53.093680   66492 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-884893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 01:34:53.106103   66492 kubeadm.go:310] [bootstrap-token] Using token: rd520d.rc6325cjita43il4
	I0815 01:34:53.107576   66492 out.go:204]   - Configuring RBAC rules ...
	I0815 01:34:53.107717   66492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 01:34:53.112060   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 01:34:53.122816   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 01:34:53.126197   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 01:34:53.129304   66492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 01:34:53.133101   66492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 01:34:53.436427   66492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 01:34:53.891110   66492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 01:34:54.439955   66492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 01:34:54.441369   66492 kubeadm.go:310] 
	I0815 01:34:54.441448   66492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 01:34:54.441457   66492 kubeadm.go:310] 
	I0815 01:34:54.441550   66492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 01:34:54.441578   66492 kubeadm.go:310] 
	I0815 01:34:54.441608   66492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 01:34:54.441663   66492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 01:34:54.441705   66492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 01:34:54.441711   66492 kubeadm.go:310] 
	I0815 01:34:54.441777   66492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 01:34:54.441784   66492 kubeadm.go:310] 
	I0815 01:34:54.441821   66492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 01:34:54.441828   66492 kubeadm.go:310] 
	I0815 01:34:54.441867   66492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 01:34:54.441977   66492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 01:34:54.442054   66492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 01:34:54.442061   66492 kubeadm.go:310] 
	I0815 01:34:54.442149   66492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 01:34:54.442255   66492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 01:34:54.442265   66492 kubeadm.go:310] 
	I0815 01:34:54.442384   66492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442477   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c \
	I0815 01:34:54.442504   66492 kubeadm.go:310] 	--control-plane 
	I0815 01:34:54.442509   66492 kubeadm.go:310] 
	I0815 01:34:54.442591   66492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 01:34:54.442598   66492 kubeadm.go:310] 
	I0815 01:34:54.442675   66492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rd520d.rc6325cjita43il4 \
	I0815 01:34:54.442811   66492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9c3333a05f786e7b5226cc63b3a8bbaccfa841c41478bf3ea2d20f1dd4fd4e5c 
	I0815 01:34:54.444409   66492 kubeadm.go:310] W0815 01:34:45.989583    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444785   66492 kubeadm.go:310] W0815 01:34:45.990491    3035 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 01:34:54.444929   66492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
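The two deprecation warnings above point at 'kubeadm config migrate'. A hedged example of running that migration against the config file used in this run (the input path and binary path come from the log; the output path is illustrative):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.migrated.yaml   # illustrative output path

The migrated file would then replace /var/tmp/minikube/kubeadm.yaml before a subsequent kubeadm init.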
	I0815 01:34:54.444951   66492 cni.go:84] Creating CNI manager for ""
	I0815 01:34:54.444960   66492 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 01:34:54.447029   66492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 01:34:54.448357   66492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 01:34:54.460176   66492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
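The log only records that 496 bytes were copied to /etc/cni/net.d/1-k8s.conflist, not the file's contents. As an illustration only, a bridge CNI conflist of the kind this step writes typically has the following shape (all field values here are assumptions, not the actual file):

    # Illustrative only: the actual conflist written by minikube may differ in names and values.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF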
	I0815 01:34:54.479219   66492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 01:34:54.479299   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:54.479342   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-884893 minikube.k8s.io/updated_at=2024_08_15T01_34_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a560a51f794134545edbbeb49e1ab4a0b1355168 minikube.k8s.io/name=no-preload-884893 minikube.k8s.io/primary=true
	I0815 01:34:54.516528   66492 ops.go:34] apiserver oom_adj: -16
	I0815 01:34:54.686689   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.186918   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:55.687118   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.186740   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:56.687051   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.187582   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:57.687662   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.187633   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:58.686885   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.187093   66492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 01:34:59.280930   66492 kubeadm.go:1113] duration metric: took 4.801695567s to wait for elevateKubeSystemPrivileges
	I0815 01:34:59.280969   66492 kubeadm.go:394] duration metric: took 4m53.494095639s to StartCluster
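The repeated 'kubectl get sa default' calls above are minikube polling, roughly every 500ms, until the default service account exists in the new cluster (the elevateKubeSystemPrivileges step timed above at ~4.8s). A bash sketch of that polling pattern, using the binary and kubeconfig paths from the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
    # Poll until the default service account is available in the freshly initialized cluster.
    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
        sleep 0.5
    done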
	I0815 01:34:59.281006   66492 settings.go:142] acquiring lock: {Name:mk3294f55e319a5208d297e21a84a1d5a3cea134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.281099   66492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:34:59.283217   66492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19443-13088/kubeconfig: {Name:mkccb16425d0a43eb586aa8069575d7bc572ddc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 01:34:59.283528   66492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 01:34:59.283693   66492 config.go:182] Loaded profile config "no-preload-884893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:34:59.283649   66492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 01:34:59.283734   66492 addons.go:69] Setting storage-provisioner=true in profile "no-preload-884893"
	I0815 01:34:59.283743   66492 addons.go:69] Setting metrics-server=true in profile "no-preload-884893"
	I0815 01:34:59.283742   66492 addons.go:69] Setting default-storageclass=true in profile "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon metrics-server=true in "no-preload-884893"
	I0815 01:34:59.283770   66492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-884893"
	I0815 01:34:59.283768   66492 addons.go:234] Setting addon storage-provisioner=true in "no-preload-884893"
	W0815 01:34:59.283882   66492 addons.go:243] addon storage-provisioner should already be in state true
	I0815 01:34:59.283912   66492 host.go:66] Checking if "no-preload-884893" exists ...
	W0815 01:34:59.283778   66492 addons.go:243] addon metrics-server should already be in state true
	I0815 01:34:59.283990   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.284206   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284238   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284296   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284321   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.284333   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.284347   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.285008   66492 out.go:177] * Verifying Kubernetes components...
	I0815 01:34:59.286336   66492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 01:34:59.302646   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0815 01:34:59.302810   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0815 01:34:59.303084   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303243   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303327   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 01:34:59.303705   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303724   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.303864   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.303911   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.303939   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304044   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304378   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.304397   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.304418   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304643   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304695   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.304899   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.304912   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.304926   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.305098   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.308826   66492 addons.go:234] Setting addon default-storageclass=true in "no-preload-884893"
	W0815 01:34:59.308848   66492 addons.go:243] addon default-storageclass should already be in state true
	I0815 01:34:59.308878   66492 host.go:66] Checking if "no-preload-884893" exists ...
	I0815 01:34:59.309223   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.309255   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.320605   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0815 01:34:59.321021   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.321570   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.321591   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.321942   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.322163   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.323439   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0815 01:34:59.323779   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.324027   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.324168   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.324180   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.324446   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.324885   66492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 01:34:59.324914   66492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 01:34:59.325881   66492 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 01:34:59.326695   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0815 01:34:59.327054   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.327257   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 01:34:59.327286   66492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 01:34:59.327304   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.327551   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.327567   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.327935   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.328243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.330384   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.330975   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331491   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.331519   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.331747   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.331916   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.331916   66492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 01:34:59.563745   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:34:59.563904   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:34:59.565631   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:34:59.565711   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:34:59.565827   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:34:59.565968   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:34:59.566095   66919 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 01:34:59.566195   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:34:59.567850   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:34:59.567922   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:34:59.567991   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:34:59.568091   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:34:59.568176   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:34:59.568283   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:34:59.568377   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:34:59.568466   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:34:59.568558   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:34:59.568674   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:34:59.568775   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:34:59.568834   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:34:59.568920   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:34:59.568998   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:34:59.569073   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:34:59.569162   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:34:59.569217   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:34:59.569330   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:34:59.569429   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:34:59.569482   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:34:59.569580   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:34:59.571031   66919 out.go:204]   - Booting up control plane ...
	I0815 01:34:59.571120   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:34:59.571198   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:34:59.571286   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:34:59.571396   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:34:59.571643   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:34:59.571729   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:34:59.571830   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572069   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572172   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572422   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572540   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.572814   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.572913   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573155   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573252   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:34:59.573474   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:34:59.573484   66919 kubeadm.go:310] 
	I0815 01:34:59.573543   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:34:59.573601   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:34:59.573610   66919 kubeadm.go:310] 
	I0815 01:34:59.573667   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:34:59.573713   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:34:59.573862   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:34:59.573878   66919 kubeadm.go:310] 
	I0815 01:34:59.574000   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:34:59.574051   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:34:59.574099   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:34:59.574109   66919 kubeadm.go:310] 
	I0815 01:34:59.574262   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:34:59.574379   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:34:59.574387   66919 kubeadm.go:310] 
	I0815 01:34:59.574509   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:34:59.574646   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:34:59.574760   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:34:59.574862   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:34:59.574880   66919 kubeadm.go:310] 
	W0815 01:34:59.574991   66919 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
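For convenience, the troubleshooting commands suggested in the error text above, collected into one sequence that can be run inside the VM (CONTAINERID is a placeholder, exactly as in the kubeadm message):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # once a failing container has been identified:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID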
	
	I0815 01:34:59.575044   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 01:35:00.029701   66919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:00.047125   66919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 01:35:00.057309   66919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 01:35:00.057336   66919 kubeadm.go:157] found existing configuration files:
	
	I0815 01:35:00.057396   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 01:35:00.066837   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 01:35:00.066901   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 01:35:00.076722   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 01:35:00.086798   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 01:35:00.086862   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 01:35:00.097486   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.109900   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 01:35:00.109981   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 01:35:00.122672   66919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 01:34:59.332080   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.332258   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.333212   66492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.333230   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 01:34:59.333246   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.336201   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336699   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.336761   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.336791   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.336965   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.337146   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.337319   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.343978   66492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0815 01:34:59.344425   66492 main.go:141] libmachine: () Calling .GetVersion
	I0815 01:34:59.344992   66492 main.go:141] libmachine: Using API Version  1
	I0815 01:34:59.345015   66492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 01:34:59.345400   66492 main.go:141] libmachine: () Calling .GetMachineName
	I0815 01:34:59.345595   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetState
	I0815 01:34:59.347262   66492 main.go:141] libmachine: (no-preload-884893) Calling .DriverName
	I0815 01:34:59.347490   66492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.347507   66492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 01:34:59.347525   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHHostname
	I0815 01:34:59.350390   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.350876   66492 main.go:141] libmachine: (no-preload-884893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:93:c6", ip: ""} in network mk-no-preload-884893: {Iface:virbr3 ExpiryTime:2024-08-15 02:29:38 +0000 UTC Type:0 Mac:52:54:00:b7:93:c6 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:no-preload-884893 Clientid:01:52:54:00:b7:93:c6}
	I0815 01:34:59.350899   66492 main.go:141] libmachine: (no-preload-884893) DBG | domain no-preload-884893 has defined IP address 192.168.61.166 and MAC address 52:54:00:b7:93:c6 in network mk-no-preload-884893
	I0815 01:34:59.351072   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHPort
	I0815 01:34:59.351243   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHKeyPath
	I0815 01:34:59.351418   66492 main.go:141] libmachine: (no-preload-884893) Calling .GetSSHUsername
	I0815 01:34:59.351543   66492 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/no-preload-884893/id_rsa Username:docker}
	I0815 01:34:59.471077   66492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 01:34:59.500097   66492 node_ready.go:35] waiting up to 6m0s for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509040   66492 node_ready.go:49] node "no-preload-884893" has status "Ready":"True"
	I0815 01:34:59.509063   66492 node_ready.go:38] duration metric: took 8.924177ms for node "no-preload-884893" to be "Ready" ...
	I0815 01:34:59.509075   66492 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 01:34:59.515979   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:34:59.594834   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 01:34:59.594856   66492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 01:34:59.597457   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 01:34:59.603544   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 01:34:59.637080   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 01:34:59.637109   66492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 01:34:59.683359   66492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 01:34:59.683388   66492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 01:34:59.730096   66492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
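After these manifests are applied, the metrics-server addon can be checked with the same in-VM kubectl and kubeconfig; a brief verification sketch (resource names assumed from the standard metrics-server addon manifests):

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    # Deployment and APIService names assumed from the stock metrics-server addon.
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get deploy metrics-server
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get apiservice v1beta1.metrics.k8s.io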
	I0815 01:35:00.403252   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403287   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403477   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403495   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403789   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403829   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403850   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403858   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.403868   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.403876   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.403891   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.403900   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.404115   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404156   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404158   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.404162   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.404177   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.404164   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.433823   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.433876   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.434285   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.434398   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.434420   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.674979   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675008   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675371   66492 main.go:141] libmachine: (no-preload-884893) DBG | Closing plugin on server side
	I0815 01:35:00.675395   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675421   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675434   66492 main.go:141] libmachine: Making call to close driver server
	I0815 01:35:00.675443   66492 main.go:141] libmachine: (no-preload-884893) Calling .Close
	I0815 01:35:00.675706   66492 main.go:141] libmachine: Successfully made call to close driver server
	I0815 01:35:00.675722   66492 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 01:35:00.675733   66492 addons.go:475] Verifying addon metrics-server=true in "no-preload-884893"
	I0815 01:35:00.677025   66492 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 01:35:00.134512   66919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 01:35:00.134579   66919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 01:35:00.146901   66919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 01:35:00.384725   66919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 01:35:00.678492   66492 addons.go:510] duration metric: took 1.394848534s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 01:35:01.522738   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:04.022711   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:06.522906   66492 pod_ready.go:102] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"False"
	I0815 01:35:08.523426   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.523453   66492 pod_ready.go:81] duration metric: took 9.007444319s for pod "coredns-6f6b679f8f-srq48" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.523465   66492 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528447   66492 pod_ready.go:92] pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.528471   66492 pod_ready.go:81] duration metric: took 4.997645ms for pod "coredns-6f6b679f8f-t77b6" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.528480   66492 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533058   66492 pod_ready.go:92] pod "etcd-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.533078   66492 pod_ready.go:81] duration metric: took 4.59242ms for pod "etcd-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.533088   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537231   66492 pod_ready.go:92] pod "kube-apiserver-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.537252   66492 pod_ready.go:81] duration metric: took 4.154988ms for pod "kube-apiserver-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.537261   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541819   66492 pod_ready.go:92] pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.541840   66492 pod_ready.go:81] duration metric: took 4.572636ms for pod "kube-controller-manager-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.541852   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920356   66492 pod_ready.go:92] pod "kube-proxy-dpggv" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:08.920394   66492 pod_ready.go:81] duration metric: took 378.534331ms for pod "kube-proxy-dpggv" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:08.920407   66492 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320112   66492 pod_ready.go:92] pod "kube-scheduler-no-preload-884893" in "kube-system" namespace has status "Ready":"True"
	I0815 01:35:09.320135   66492 pod_ready.go:81] duration metric: took 399.72085ms for pod "kube-scheduler-no-preload-884893" in "kube-system" namespace to be "Ready" ...
	I0815 01:35:09.320143   66492 pod_ready.go:38] duration metric: took 9.811056504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
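The readiness polling above (node Ready, then every pod matching the listed system-critical label selectors) can be approximated from the host with kubectl wait; a rough equivalent, assuming KUBECONFIG points at the kubeconfig updated earlier in this run and that the context name matches the profile:

    # Context name assumed to match the profile (no-preload-884893).
    kubectl --context no-preload-884893 wait --for=condition=Ready node/no-preload-884893 --timeout=6m0s
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl --context no-preload-884893 -n kube-system \
            wait --for=condition=Ready pod -l "$sel" --timeout=6m0s
    done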
	I0815 01:35:09.320158   66492 api_server.go:52] waiting for apiserver process to appear ...
	I0815 01:35:09.320216   66492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 01:35:09.336727   66492 api_server.go:72] duration metric: took 10.053160882s to wait for apiserver process to appear ...
	I0815 01:35:09.336760   66492 api_server.go:88] waiting for apiserver healthz status ...
	I0815 01:35:09.336777   66492 api_server.go:253] Checking apiserver healthz at https://192.168.61.166:8443/healthz ...
	I0815 01:35:09.340897   66492 api_server.go:279] https://192.168.61.166:8443/healthz returned 200:
	ok
	I0815 01:35:09.341891   66492 api_server.go:141] control plane version: v1.31.0
	I0815 01:35:09.341911   66492 api_server.go:131] duration metric: took 5.145922ms to wait for apiserver health ...
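The healthz wait above probes the apiserver endpoint directly; an equivalent manual check against the address from the log (-k because the host's curl does not trust the cluster CA by default):

    curl -k https://192.168.61.166:8443/healthz
    # expected response body: ok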
	I0815 01:35:09.341919   66492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 01:35:09.523808   66492 system_pods.go:59] 9 kube-system pods found
	I0815 01:35:09.523839   66492 system_pods.go:61] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.523844   66492 system_pods.go:61] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.523848   66492 system_pods.go:61] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.523851   66492 system_pods.go:61] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.523857   66492 system_pods.go:61] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.523860   66492 system_pods.go:61] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.523863   66492 system_pods.go:61] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.523871   66492 system_pods.go:61] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.523875   66492 system_pods.go:61] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.523883   66492 system_pods.go:74] duration metric: took 181.959474ms to wait for pod list to return data ...
	I0815 01:35:09.523892   66492 default_sa.go:34] waiting for default service account to be created ...
	I0815 01:35:09.720531   66492 default_sa.go:45] found service account: "default"
	I0815 01:35:09.720565   66492 default_sa.go:55] duration metric: took 196.667806ms for default service account to be created ...
	I0815 01:35:09.720574   66492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 01:35:09.923419   66492 system_pods.go:86] 9 kube-system pods found
	I0815 01:35:09.923454   66492 system_pods.go:89] "coredns-6f6b679f8f-srq48" [e9520ab8-24d6-410d-bcba-b59e91e817a9] Running
	I0815 01:35:09.923463   66492 system_pods.go:89] "coredns-6f6b679f8f-t77b6" [fcdf11ef-28a6-428c-b033-e29b51af8f0e] Running
	I0815 01:35:09.923471   66492 system_pods.go:89] "etcd-no-preload-884893" [fa960cfe-331d-4656-93e9-a58921bd62de] Running
	I0815 01:35:09.923477   66492 system_pods.go:89] "kube-apiserver-no-preload-884893" [7a8244fb-aa58-4e8e-957a-f3fbd388837b] Running
	I0815 01:35:09.923484   66492 system_pods.go:89] "kube-controller-manager-no-preload-884893" [0b6c5424-6fe4-42b6-b081-4409f90db35f] Running
	I0815 01:35:09.923490   66492 system_pods.go:89] "kube-proxy-dpggv" [55ef2a4b-a502-452d-a3bd-df1209ff247b] Running
	I0815 01:35:09.923494   66492 system_pods.go:89] "kube-scheduler-no-preload-884893" [cd295ee0-1897-4cd3-896d-09dd36842248] Running
	I0815 01:35:09.923502   66492 system_pods.go:89] "metrics-server-6867b74b74-w47b2" [7423be62-ae01-4b3f-9e24-049f4788f32f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 01:35:09.923509   66492 system_pods.go:89] "storage-provisioner" [b4cf6d02-281f-4fb5-9ff7-c36143d3af58] Running
	I0815 01:35:09.923524   66492 system_pods.go:126] duration metric: took 202.943928ms to wait for k8s-apps to be running ...
	I0815 01:35:09.923533   66492 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 01:35:09.923586   66492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 01:35:09.938893   66492 system_svc.go:56] duration metric: took 15.353021ms WaitForService to wait for kubelet
	I0815 01:35:09.938917   66492 kubeadm.go:582] duration metric: took 10.655355721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 01:35:09.938942   66492 node_conditions.go:102] verifying NodePressure condition ...
	I0815 01:35:10.120692   66492 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 01:35:10.120717   66492 node_conditions.go:123] node cpu capacity is 2
	I0815 01:35:10.120728   66492 node_conditions.go:105] duration metric: took 181.7794ms to run NodePressure ...
	I0815 01:35:10.120739   66492 start.go:241] waiting for startup goroutines ...
	I0815 01:35:10.120746   66492 start.go:246] waiting for cluster config update ...
	I0815 01:35:10.120754   66492 start.go:255] writing updated cluster config ...
	I0815 01:35:10.121019   66492 ssh_runner.go:195] Run: rm -f paused
	I0815 01:35:10.172726   66492 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 01:35:10.174631   66492 out.go:177] * Done! kubectl is now configured to use "no-preload-884893" cluster and "default" namespace by default
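	The block above is minikube's post-start verification for the no-preload-884893 profile: the control-plane pods report Ready, the apiserver process exists and answers /healthz, the kube-system pods and default service account are present, and the kubelet unit is active. A rough manual approximation of the same checks, assuming kubectl and minikube are on the PATH and using the context name and apiserver endpoint taken from this log (the exact commands are not what minikube runs internally, it drives these checks over ssh_runner):
	
	  # control-plane and kube-system pods for this profile
	  kubectl --context no-preload-884893 get pods -n kube-system
	  # apiserver health endpoint reported above (self-signed cert, hence -k)
	  curl -k https://192.168.61.166:8443/healthz
	  # kubelet unit on the node
	  minikube -p no-preload-884893 ssh -- sudo systemctl is-active kubelet
	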
	I0815 01:36:56.608471   66919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 01:36:56.608611   66919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 01:36:56.610133   66919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 01:36:56.610200   66919 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 01:36:56.610290   66919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 01:36:56.610405   66919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 01:36:56.610524   66919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 01:36:56.610616   66919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 01:36:56.612092   66919 out.go:204]   - Generating certificates and keys ...
	I0815 01:36:56.612184   66919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 01:36:56.612246   66919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 01:36:56.612314   66919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 01:36:56.612371   66919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 01:36:56.612431   66919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 01:36:56.612482   66919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 01:36:56.612534   66919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 01:36:56.612585   66919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 01:36:56.612697   66919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 01:36:56.612796   66919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 01:36:56.612859   66919 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 01:36:56.613044   66919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 01:36:56.613112   66919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 01:36:56.613157   66919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 01:36:56.613244   66919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 01:36:56.613322   66919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 01:36:56.613455   66919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 01:36:56.613565   66919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 01:36:56.613631   66919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 01:36:56.613729   66919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 01:36:56.615023   66919 out.go:204]   - Booting up control plane ...
	I0815 01:36:56.615129   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 01:36:56.615203   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 01:36:56.615260   66919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 01:36:56.615330   66919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 01:36:56.615485   66919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 01:36:56.615542   66919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 01:36:56.615620   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.615805   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.615892   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616085   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616149   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616297   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616355   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616555   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616646   66919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 01:36:56.616833   66919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 01:36:56.616842   66919 kubeadm.go:310] 
	I0815 01:36:56.616873   66919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 01:36:56.616905   66919 kubeadm.go:310] 		timed out waiting for the condition
	I0815 01:36:56.616912   66919 kubeadm.go:310] 
	I0815 01:36:56.616939   66919 kubeadm.go:310] 	This error is likely caused by:
	I0815 01:36:56.616969   66919 kubeadm.go:310] 		- The kubelet is not running
	I0815 01:36:56.617073   66919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 01:36:56.617089   66919 kubeadm.go:310] 
	I0815 01:36:56.617192   66919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 01:36:56.617220   66919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 01:36:56.617255   66919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 01:36:56.617263   66919 kubeadm.go:310] 
	I0815 01:36:56.617393   66919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 01:36:56.617469   66919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 01:36:56.617478   66919 kubeadm.go:310] 
	I0815 01:36:56.617756   66919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 01:36:56.617889   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 01:36:56.617967   66919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 01:36:56.618057   66919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 01:36:56.618070   66919 kubeadm.go:310] 
	I0815 01:36:56.618125   66919 kubeadm.go:394] duration metric: took 8m2.571608887s to StartCluster
	I0815 01:36:56.618169   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 01:36:56.618222   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 01:36:56.659324   66919 cri.go:89] found id: ""
	I0815 01:36:56.659353   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.659365   66919 logs.go:278] No container was found matching "kube-apiserver"
	I0815 01:36:56.659372   66919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 01:36:56.659443   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 01:36:56.695979   66919 cri.go:89] found id: ""
	I0815 01:36:56.696003   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.696010   66919 logs.go:278] No container was found matching "etcd"
	I0815 01:36:56.696015   66919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 01:36:56.696063   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 01:36:56.730063   66919 cri.go:89] found id: ""
	I0815 01:36:56.730092   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.730100   66919 logs.go:278] No container was found matching "coredns"
	I0815 01:36:56.730106   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 01:36:56.730161   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 01:36:56.763944   66919 cri.go:89] found id: ""
	I0815 01:36:56.763969   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.763983   66919 logs.go:278] No container was found matching "kube-scheduler"
	I0815 01:36:56.763988   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 01:36:56.764047   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 01:36:56.798270   66919 cri.go:89] found id: ""
	I0815 01:36:56.798299   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.798307   66919 logs.go:278] No container was found matching "kube-proxy"
	I0815 01:36:56.798313   66919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 01:36:56.798366   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 01:36:56.832286   66919 cri.go:89] found id: ""
	I0815 01:36:56.832318   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.832328   66919 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 01:36:56.832335   66919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 01:36:56.832410   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 01:36:56.866344   66919 cri.go:89] found id: ""
	I0815 01:36:56.866380   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.866390   66919 logs.go:278] No container was found matching "kindnet"
	I0815 01:36:56.866398   66919 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 01:36:56.866461   66919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 01:36:56.904339   66919 cri.go:89] found id: ""
	I0815 01:36:56.904366   66919 logs.go:276] 0 containers: []
	W0815 01:36:56.904375   66919 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 01:36:56.904387   66919 logs.go:123] Gathering logs for describe nodes ...
	I0815 01:36:56.904405   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 01:36:56.982024   66919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 01:36:56.982045   66919 logs.go:123] Gathering logs for CRI-O ...
	I0815 01:36:56.982057   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 01:36:57.092250   66919 logs.go:123] Gathering logs for container status ...
	I0815 01:36:57.092288   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 01:36:57.157548   66919 logs.go:123] Gathering logs for kubelet ...
	I0815 01:36:57.157582   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 01:36:57.216511   66919 logs.go:123] Gathering logs for dmesg ...
	I0815 01:36:57.216563   66919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 01:36:57.230210   66919 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 01:36:57.230256   66919 out.go:239] * 
	W0815 01:36:57.230316   66919 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.230347   66919 out.go:239] * 
	W0815 01:36:57.231157   66919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 01:36:57.234003   66919 out.go:177] 
	W0815 01:36:57.235088   66919 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 01:36:57.235127   66919 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 01:36:57.235146   66919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 01:36:57.236647   66919 out.go:177] 
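	The failure above is kubeadm's wait-control-plane phase timing out because the kubelet never becomes healthy on the v1.20.0 (old-k8s-version) profile; minikube's own diagnostics earlier in the log find no control-plane containers at all, and its closing suggestion points at the kubelet cgroup driver. A hedged sketch of the follow-ups the log itself recommends, using the profile name and Kubernetes version from this run (whether the cgroup-driver override actually resolves this failure is an assumption, not something the report confirms):
	
	  # inspect the kubelet on the node, as the kubeadm output recommends
	  minikube -p old-k8s-version-390782 ssh -- sudo systemctl status kubelet
	  minikube -p old-k8s-version-390782 ssh -- sudo journalctl -xeu kubelet
	  # list any control-plane containers CRI-O did manage to start
	  minikube -p old-k8s-version-390782 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	  # retry the start with the suggested kubelet cgroup driver override
	  minikube start -p old-k8s-version-390782 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	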
	
	
	==> CRI-O <==
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.943144243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686496943122364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7b80fdd-bd29-4b1a-a51e-aecff4ea7474 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.943702046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1943f51d-32b0-4b71-8cc0-b4b23dadb74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.943752983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1943f51d-32b0-4b71-8cc0-b4b23dadb74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.943785784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1943f51d-32b0-4b71-8cc0-b4b23dadb74c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.979545687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a84584e-a40f-424a-888b-4f0c6cd3819b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.979673268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a84584e-a40f-424a-888b-4f0c6cd3819b name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.981078007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=145dac45-c12e-4f24-b368-69dfde30d623 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.981520656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686496981459901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=145dac45-c12e-4f24-b368-69dfde30d623 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.982127940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da8c70c5-ff52-40d1-8a0a-e0ca55d074eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.982191326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da8c70c5-ff52-40d1-8a0a-e0ca55d074eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:16 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:16.982227167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=da8c70c5-ff52-40d1-8a0a-e0ca55d074eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.013134568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=725e0214-d489-40c9-b540-57f03f66a340 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.013227944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=725e0214-d489-40c9-b540-57f03f66a340 name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.014165988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c252e662-ecc3-428b-afb9-8698e923ad2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.014608417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686497014582187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c252e662-ecc3-428b-afb9-8698e923ad2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.015072424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f2a87b4-8c5a-43e7-bc88-e1f32cc27cba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.015128780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f2a87b4-8c5a-43e7-bc88-e1f32cc27cba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.015159476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3f2a87b4-8c5a-43e7-bc88-e1f32cc27cba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.047261724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3704a023-7800-4288-9efe-891c35f7f91e name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.047365588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3704a023-7800-4288-9efe-891c35f7f91e name=/runtime.v1.RuntimeService/Version
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.048534032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27d5c1c9-4962-4627-a536-75cc48a25ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.048982798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723686497048946694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27d5c1c9-4962-4627-a536-75cc48a25ee5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.049775660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b49d6b3c-4a1e-4c0c-8963-0248fa06e0f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.049877016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b49d6b3c-4a1e-4c0c-8963-0248fa06e0f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 01:48:17 old-k8s-version-390782 crio[654]: time="2024-08-15 01:48:17.049934628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b49d6b3c-4a1e-4c0c-8963-0248fa06e0f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 01:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050416] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037789] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.678929] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.857055] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.487001] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.860898] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.063147] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057764] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.185464] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.131345] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.258818] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +5.930800] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.065041] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.685778] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[Aug15 01:29] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 01:33] systemd-fstab-generator[5155]: Ignoring "noauto" option for root device
	[Aug15 01:35] systemd-fstab-generator[5437]: Ignoring "noauto" option for root device
	[  +0.071528] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:48:17 up 19 min,  0 users,  load average: 0.03, 0.04, 0.03
	Linux old-k8s-version-390782 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00074df58, 0x4f0ac20, 0xc0009aaf50, 0x1, 0xc0001000c0)
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d6c40, 0xc0001000c0)
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: goroutine 124 [select]:
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00075c4b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0002c0ba0, 0x0, 0x0)
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000615a40)
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6927]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 15 01:48:15 old-k8s-version-390782 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 15 01:48:15 old-k8s-version-390782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 15 01:48:15 old-k8s-version-390782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Aug 15 01:48:15 old-k8s-version-390782 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 15 01:48:15 old-k8s-version-390782 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6954]: I0815 01:48:15.926300    6954 server.go:416] Version: v1.20.0
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6954]: I0815 01:48:15.926841    6954 server.go:837] Client rotation is on, will bootstrap in background
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6954]: I0815 01:48:15.929327    6954 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6954]: W0815 01:48:15.930462    6954 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 15 01:48:15 old-k8s-version-390782 kubelet[6954]: I0815 01:48:15.930553    6954 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 2 (224.233735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-390782" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (134.50s)
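The node-level sections above are consistent with that diagnosis: CRI-O answers API calls but lists no containers, and the kubelet unit is crash-looping (systemd restart counter at 138, with "Cannot detect current cgroup on cgroup v2" in its startup output), so the apiserver on localhost:8443 never comes up and the addon check times out. The follow-up the report's own advice box asks for is a full log bundle; a minimal sketch against this profile, with the profile name taken from the run (the exact filename is arbitrary):

  # capture the full node logs for the failing profile
  minikube logs --file=logs.txt -p old-k8s-version-390782
  # confirm the kubelet crash loop directly on the node
  minikube -p old-k8s-version-390782 ssh -- sudo systemctl status kubelet
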


Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 29.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.91
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.57
22 TestOffline 109.2
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 155.15
31 TestAddons/serial/GCPAuth/Namespaces 0.15
33 TestAddons/parallel/Registry 15.24
35 TestAddons/parallel/InspektorGadget 11.76
37 TestAddons/parallel/HelmTiller 11.98
39 TestAddons/parallel/CSI 47.32
40 TestAddons/parallel/Headlamp 17.67
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 55.68
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 12.28
46 TestCertOptions 46.9
47 TestCertExpiration 344.34
49 TestForceSystemdFlag 71.44
50 TestForceSystemdEnv 44.66
52 TestKVMDriverInstallOrUpdate 4.11
56 TestErrorSpam/setup 36.58
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.47
60 TestErrorSpam/unpause 1.61
61 TestErrorSpam/stop 5.7
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.22
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.87
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.65
73 TestFunctional/serial/CacheCmd/cache/add_local 2.07
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.25
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.27
85 TestFunctional/serial/InvalidService 4.44
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 12.98
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.92
95 TestFunctional/parallel/ServiceCmdConnect 6.47
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 43.52
99 TestFunctional/parallel/SSHCmd 0.41
100 TestFunctional/parallel/CpCmd 1.25
101 TestFunctional/parallel/MySQL 21.94
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.22
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.61
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.52
120 TestFunctional/parallel/ImageCommands/Setup 1.72
121 TestFunctional/parallel/ServiceCmd/DeployApp 51.17
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.95
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 6.49
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.57
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.3
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
139 TestFunctional/parallel/ProfileCmd/profile_list 0.27
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
141 TestFunctional/parallel/MountCmd/any-port 8.38
142 TestFunctional/parallel/Version/short 0.04
143 TestFunctional/parallel/Version/components 0.59
144 TestFunctional/parallel/MountCmd/specific-port 1.74
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.12
146 TestFunctional/parallel/ServiceCmd/List 1.22
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.22
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
149 TestFunctional/parallel/ServiceCmd/Format 0.27
150 TestFunctional/parallel/ServiceCmd/URL 0.27
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 195.62
158 TestMultiControlPlane/serial/DeployApp 6.95
159 TestMultiControlPlane/serial/PingHostFromPods 1.13
160 TestMultiControlPlane/serial/AddWorkerNode 59.59
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
163 TestMultiControlPlane/serial/CopyFile 12.11
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.52
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 453.53
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 75.97
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 75.63
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.67
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.58
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 83.18
211 TestMountStart/serial/StartWithMountFirst 31.05
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 24.42
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 22.3
219 TestMountStart/serial/VerifyMountPostStop 0.35
222 TestMultiNode/serial/FreshStart2Nodes 111.13
223 TestMultiNode/serial/DeployApp2Nodes 5.34
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 50.92
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.97
229 TestMultiNode/serial/StopNode 2.17
230 TestMultiNode/serial/StartAfterStop 38.85
232 TestMultiNode/serial/DeleteNode 2.14
234 TestMultiNode/serial/RestartMultiNode 174.89
235 TestMultiNode/serial/ValidateNameConflict 43.59
242 TestScheduledStopUnix 110.1
246 TestRunningBinaryUpgrade 214.64
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestNoKubernetes/serial/StartWithK8s 92.54
267 TestNetworkPlugins/group/false 3.18
271 TestNoKubernetes/serial/StartWithStopK8s 38.23
272 TestStoppedBinaryUpgrade/Setup 2.23
273 TestStoppedBinaryUpgrade/Upgrade 105.36
274 TestNoKubernetes/serial/Start 43.68
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
276 TestNoKubernetes/serial/ProfileList 29.95
277 TestNoKubernetes/serial/Stop 1.31
278 TestNoKubernetes/serial/StartNoArgs 21.76
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestPause/serial/Start 120.94
287 TestStartStop/group/no-preload/serial/FirstStart 101.63
289 TestStartStop/group/embed-certs/serial/FirstStart 85.28
290 TestStartStop/group/no-preload/serial/DeployApp 10.31
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
293 TestStartStop/group/embed-certs/serial/DeployApp 9.28
295 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.29
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
304 TestStartStop/group/no-preload/serial/SecondStart 680.25
306 TestStartStop/group/old-k8s-version/serial/Stop 3.28
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
309 TestStartStop/group/embed-certs/serial/SecondStart 582.38
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 479.71
321 TestStartStop/group/newest-cni/serial/FirstStart 47.87
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
324 TestStartStop/group/newest-cni/serial/Stop 9.51
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/SecondStart 37.46
327 TestNetworkPlugins/group/auto/Start 100.98
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/newest-cni/serial/Pause 4.05
332 TestNetworkPlugins/group/kindnet/Start 71.57
333 TestNetworkPlugins/group/calico/Start 81.36
334 TestNetworkPlugins/group/auto/KubeletFlags 0.21
335 TestNetworkPlugins/group/auto/NetCatPod 11.27
336 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
338 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
339 TestNetworkPlugins/group/auto/DNS 0.22
340 TestNetworkPlugins/group/auto/Localhost 0.15
341 TestNetworkPlugins/group/auto/HairPin 0.17
342 TestNetworkPlugins/group/kindnet/DNS 0.17
343 TestNetworkPlugins/group/kindnet/Localhost 0.14
344 TestNetworkPlugins/group/kindnet/HairPin 0.15
345 TestNetworkPlugins/group/custom-flannel/Start 74.1
346 TestNetworkPlugins/group/enable-default-cni/Start 78.66
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.21
349 TestNetworkPlugins/group/calico/NetCatPod 11.36
350 TestNetworkPlugins/group/flannel/Start 104.14
351 TestNetworkPlugins/group/calico/DNS 0.18
352 TestNetworkPlugins/group/calico/Localhost 0.14
353 TestNetworkPlugins/group/calico/HairPin 0.13
354 TestNetworkPlugins/group/bridge/Start 119.49
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
357 TestNetworkPlugins/group/custom-flannel/DNS 0.17
358 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
359 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
367 TestNetworkPlugins/group/flannel/NetCatPod 9.21
368 TestNetworkPlugins/group/flannel/DNS 0.14
369 TestNetworkPlugins/group/flannel/Localhost 0.11
370 TestNetworkPlugins/group/flannel/HairPin 0.11
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
372 TestNetworkPlugins/group/bridge/NetCatPod 10.21
373 TestNetworkPlugins/group/bridge/DNS 0.16
374 TestNetworkPlugins/group/bridge/Localhost 0.11
375 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (29.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-024815 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-024815 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (29.459399816s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (29.46s)
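
The download-only flow exercised here can be reproduced outside the harness with any minikube binary. The sketch below is illustrative only: it assumes a local libvirt/KVM setup and the default MINIKUBE_HOME, and the profile name download-smoke is made up for the example.

	# Cache the ISO, Kubernetes binaries and the cri-o preload without booting a VM
	minikube start -p download-smoke --download-only \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
	# The preload tarball should now sit in the local cache (default MINIKUBE_HOME assumed)
	ls ~/.minikube/cache/preloaded-tarball/
	# Remove the stub profile once done
	minikube delete -p download-smoke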

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-024815
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-024815: exit status 85 (54.545884ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-024815 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |          |
	|         | -p download-only-024815        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:25.367286   20291 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:25.367399   20291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:25.367408   20291 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:25.367413   20291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:25.367631   20291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	W0815 00:05:25.367769   20291 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19443-13088/.minikube/config/config.json: open /home/jenkins/minikube-integration/19443-13088/.minikube/config/config.json: no such file or directory
	I0815 00:05:25.368367   20291 out.go:298] Setting JSON to true
	I0815 00:05:25.369259   20291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2870,"bootTime":1723677455,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:25.369313   20291 start.go:139] virtualization: kvm guest
	I0815 00:05:25.371550   20291 out.go:97] [download-only-024815] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0815 00:05:25.371644   20291 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 00:05:25.371673   20291 notify.go:220] Checking for updates...
	I0815 00:05:25.372986   20291 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:05:25.374270   20291 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:25.375495   20291 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:05:25.376559   20291 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:05:25.377683   20291 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 00:05:25.379729   20291 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:05:25.379971   20291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:25.476067   20291 out.go:97] Using the kvm2 driver based on user configuration
	I0815 00:05:25.476107   20291 start.go:297] selected driver: kvm2
	I0815 00:05:25.476122   20291 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:05:25.476448   20291 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:25.476577   20291 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:05:25.490863   20291 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:05:25.490908   20291 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:25.491413   20291 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 00:05:25.491572   20291 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:05:25.491633   20291 cni.go:84] Creating CNI manager for ""
	I0815 00:05:25.491647   20291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:05:25.491654   20291 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 00:05:25.491703   20291 start.go:340] cluster config:
	{Name:download-only-024815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-024815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:25.491863   20291 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:25.493527   20291 out.go:97] Downloading VM boot image ...
	I0815 00:05:25.493561   20291 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 00:05:36.046836   20291 out.go:97] Starting "download-only-024815" primary control-plane node in "download-only-024815" cluster
	I0815 00:05:36.046867   20291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:05:36.580317   20291 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:36.580357   20291 cache.go:56] Caching tarball of preloaded images
	I0815 00:05:36.580511   20291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 00:05:36.582142   20291 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 00:05:36.582158   20291 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 00:05:37.126057   20291 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-024815 host does not exist
	  To start a cluster, run: "minikube start -p download-only-024815"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-024815
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (13.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-303162 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-303162 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.910347239s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-303162
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-303162: exit status 85 (53.577225ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-024815 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | -p download-only-024815        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| delete  | -p download-only-024815        | download-only-024815 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC | 15 Aug 24 00:05 UTC |
	| start   | -o=json --download-only        | download-only-303162 | jenkins | v1.33.1 | 15 Aug 24 00:05 UTC |                     |
	|         | -p download-only-303162        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 00:05:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 00:05:55.125109   20570 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:05:55.125354   20570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:55.125362   20570 out.go:304] Setting ErrFile to fd 2...
	I0815 00:05:55.125367   20570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:05:55.125535   20570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:05:55.126051   20570 out.go:298] Setting JSON to true
	I0815 00:05:55.126840   20570 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2900,"bootTime":1723677455,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:05:55.126889   20570 start.go:139] virtualization: kvm guest
	I0815 00:05:55.128927   20570 out.go:97] [download-only-303162] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:05:55.129076   20570 notify.go:220] Checking for updates...
	I0815 00:05:55.130503   20570 out.go:169] MINIKUBE_LOCATION=19443
	I0815 00:05:55.131829   20570 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:05:55.132835   20570 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:05:55.133921   20570 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:05:55.134993   20570 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 00:05:55.137029   20570 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 00:05:55.137237   20570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:05:55.167731   20570 out.go:97] Using the kvm2 driver based on user configuration
	I0815 00:05:55.167763   20570 start.go:297] selected driver: kvm2
	I0815 00:05:55.167767   20570 start.go:901] validating driver "kvm2" against <nil>
	I0815 00:05:55.168058   20570 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:55.168131   20570 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19443-13088/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 00:05:55.183717   20570 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 00:05:55.183753   20570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 00:05:55.184195   20570 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 00:05:55.184325   20570 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 00:05:55.184378   20570 cni.go:84] Creating CNI manager for ""
	I0815 00:05:55.184390   20570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 00:05:55.184399   20570 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 00:05:55.184440   20570 start.go:340] cluster config:
	{Name:download-only-303162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-303162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:05:55.184522   20570 iso.go:125] acquiring lock: {Name:mk32aeaa0100c55740e9f02cdcbc99755de867ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 00:05:55.185939   20570 out.go:97] Starting "download-only-303162" primary control-plane node in "download-only-303162" cluster
	I0815 00:05:55.185953   20570 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:55.684247   20570 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 00:05:55.684281   20570 cache.go:56] Caching tarball of preloaded images
	I0815 00:05:55.684436   20570 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 00:05:55.686435   20570 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 00:05:55.686450   20570 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 00:05:55.785150   20570 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19443-13088/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-303162 host does not exist
	  To start a cluster, run: "minikube start -p download-only-303162"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-303162
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-836990 --alsologtostderr --binary-mirror http://127.0.0.1:37773 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-836990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-836990
--- PASS: TestBinaryMirror (0.57s)
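
As a hedged aside, --binary-mirror points minikube at an alternate location for the kubeadm/kubelet/kubectl downloads; the test above serves them from a throwaway local HTTP server. A rough local equivalent, with the mirror URL and profile name purely placeholders:

	# Fetch Kubernetes binaries from a private mirror instead of the default location
	minikube start -p mirror-demo --download-only \
	  --binary-mirror http://127.0.0.1:8080 \
	  --driver=kvm2 --container-runtime=crio
	minikube delete -p mirror-demo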

                                                
                                    
x
+
TestOffline (109.2s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-278022 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-278022 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m48.211611873s)
helpers_test.go:175: Cleaning up "offline-crio-278022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-278022
--- PASS: TestOffline (109.20s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-799058
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-799058: exit status 85 (45.766359ms)

                                                
                                                
-- stdout --
	* Profile "addons-799058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-799058"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-799058
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-799058: exit status 85 (45.378559ms)

                                                
                                                
-- stdout --
	* Profile "addons-799058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-799058"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
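
Both pre-setup checks confirm that addon commands refuse to run against a profile that does not exist yet (exit status 85). Against a running profile the same commands succeed; a minimal sketch, with the profile name demo purely illustrative:

	minikube start -p demo --driver=kvm2 --container-runtime=crio
	minikube addons enable dashboard -p demo
	minikube addons disable dashboard -p demo
	minikube delete -p demo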

                                                
                                    
x
+
TestAddons/Setup (155.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-799058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-799058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.154247577s)
--- PASS: TestAddons/Setup (155.15s)
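
The setup enables the whole addon matrix up front via repeated --addons flags. Addons can equally be toggled after the cluster is up; a brief sketch against the profile created above, assuming it is still running:

	# List addon states, then enable one individually
	minikube addons list -p addons-799058
	minikube addons enable metrics-server -p addons-799058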

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-799058 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-799058 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.953818ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-fwfvr" [0c0970af-9934-491e-bcfa-fa54ed7e0e3e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003325152s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kq9fl" [58301448-7012-48c0-8f9b-a5da1d7ebb5b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003939188s
addons_test.go:342: (dbg) Run:  kubectl --context addons-799058 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-799058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-799058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.524414284s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 ip
2024/08/15 00:09:18 [DEBUG] GET http://192.168.39.195:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.24s)
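
The registry check above probes the addon from both sides. A condensed version of the same probes, assuming the addons-799058 profile is still up (the pod name registry-probe is hypothetical):

	# From the host: the registry listens on port 5000 at the node IP
	curl -sI http://$(minikube -p addons-799058 ip):5000/
	# From inside the cluster: the kube-system service DNS name resolves
	kubectl --context addons-799058 run --rm -it registry-probe \
	  --image=gcr.io/k8s-minikube/busybox --restart=Never -- \
	  wget --spider -S http://registry.kube-system.svc.cluster.local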

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2mt4t" [96a6037a-4972-4fb5-a861-31b0b54f75b9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003498016s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-799058
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-799058: (5.759581256s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.263561ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-xd29w" [792a4027-3c8e-4383-ae2c-9615a900c9a9] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003842686s
addons_test.go:475: (dbg) Run:  kubectl --context addons-799058 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-799058 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.432780213s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.98s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 10.659935ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-799058 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-799058 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [798aa1f1-370d-468c-9ed9-2d5c757a173d] Pending
helpers_test.go:344: "task-pv-pod" [798aa1f1-370d-468c-9ed9-2d5c757a173d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [798aa1f1-370d-468c-9ed9-2d5c757a173d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004144008s
addons_test.go:590: (dbg) Run:  kubectl --context addons-799058 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-799058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-799058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-799058 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-799058 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-799058 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-799058 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [35a80163-8083-4bd9-97f7-ee00337c64eb] Pending
helpers_test.go:344: "task-pv-pod-restore" [35a80163-8083-4bd9-97f7-ee00337c64eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [35a80163-8083-4bd9-97f7-ee00337c64eb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003691389s
addons_test.go:632: (dbg) Run:  kubectl --context addons-799058 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-799058 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-799058 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.659524652s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.32s)
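
The repeated jsonpath queries above are the helper polling the claim until its phase reports Bound; PVCs do not expose a Ready condition, so the phase is read directly. Equivalent manual checks, assuming the same profile and the objects from testdata/csi-hostpath-driver:

	# Read the claim phase the same way the helper does
	kubectl --context addons-799058 get pvc hpvc -o jsonpath='{.status.phase}'
	# The consumer pod does expose a Ready condition, so it can be waited on directly
	kubectl --context addons-799058 wait --for=condition=ready pod task-pv-pod --timeout=6m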

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-799058 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-fcqnb" [6eacab4b-2e15-4e7c-aa90-af975f52d533] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-fcqnb" [6eacab4b-2e15-4e7c-aa90-af975f52d533] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004702969s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable headlamp --alsologtostderr -v=1: (5.695112809s)
--- PASS: TestAddons/parallel/Headlamp (17.67s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-69pgr" [244f981a-1b71-4886-8d03-b08d769ddc3e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00372841s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-799058
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.68s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-799058 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-799058 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f5c961e0-c639-4f18-88e7-48b9c29c4689] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f5c961e0-c639-4f18-88e7-48b9c29c4689] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f5c961e0-c639-4f18-88e7-48b9c29c4689] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005349256s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-799058 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 ssh "cat /opt/local-path-provisioner/pvc-91dd3a08-78ae-4a50-9888-964894be42ae_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-799058 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-799058 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.905569772s)
--- PASS: TestAddons/parallel/LocalPath (55.68s)
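
The data written through the local-path PVC lands on the node filesystem under /opt/local-path-provisioner, as the ssh check above shows; the exact pvc-<uid> directory name differs on every run. To browse it manually on a live profile:

	minikube -p addons-799058 ssh "ls /opt/local-path-provisioner/"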

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4jqvz" [86f19320-28d1-4fc0-9865-20a09c4e891a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003641176s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-799058
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-cjv7g" [d3133423-db6f-4957-a80e-3d822d6c67c7] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003226691s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-799058 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-799058 addons disable yakd --alsologtostderr -v=1: (6.276790332s)
--- PASS: TestAddons/parallel/Yakd (12.28s)

                                                
                                    
x
+
TestCertOptions (46.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-411164 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-411164 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.487458554s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-411164 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-411164 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-411164 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-411164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-411164
--- PASS: TestCertOptions (46.90s)
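
The flags above control the SANs and serving port baked into the apiserver certificate. A hand-run equivalent (the grep filters are added here for readability, not part of the test):

minikube start -p cert-options-411164 --memory=2048 \
  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
  --apiserver-names=localhost --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio

# the extra IPs and names should appear as Subject Alternative Names
minikube -p cert-options-411164 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

# the in-guest kubeconfig should point at the custom port 8555
minikube ssh -p cert-options-411164 -- "sudo cat /etc/kubernetes/admin.conf" | grep server

minikube delete -p cert-options-411164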

                                                
                                    
x
+
TestCertExpiration (344.34s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-131152 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-131152 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (50.059545718s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-131152 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-131152 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m53.264926335s)
helpers_test.go:175: Cleaning up "cert-expiration-131152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-131152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-131152: (1.014888628s)
--- PASS: TestCertExpiration (344.34s)
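
What this run exercises: certificates are first issued with a 3-minute lifetime, and after they lapse the second start is expected to regenerate them with the new 8760h lifetime rather than fail. Roughly:

minikube start -p cert-expiration-131152 --memory=2048 --cert-expiration=3m \
  --driver=kvm2 --container-runtime=crio

# wait out the 3m window, then restart with a longer expiration;
# an expired cert should be renewed, not reported as a fatal error
sleep 180
minikube start -p cert-expiration-131152 --memory=2048 --cert-expiration=8760h \
  --driver=kvm2 --container-runtime=crio

minikube delete -p cert-expiration-131152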

                                                
                                    
x
+
TestForceSystemdFlag (71.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-221548 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-221548 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.245757542s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-221548 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-221548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-221548
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-221548: (1.000045889s)
--- PASS: TestForceSystemdFlag (71.44s)
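
The cat above presumably asserts that --force-systemd makes minikube generate a CRI-O drop-in selecting the systemd cgroup manager; the expected value below is an assumption about that assertion. A manual spot-check:

minikube start -p force-systemd-flag-221548 --memory=2048 --force-systemd \
  --driver=kvm2 --container-runtime=crio

# expect cgroup_manager = "systemd" in the generated drop-in
minikube -p force-systemd-flag-221548 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager

The Env variant below appears to exercise the same behaviour via the MINIKUBE_FORCE_SYSTEMD environment variable (visible in the dry-run output later in this report) rather than the CLI flag.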

                                                
                                    
x
+
TestForceSystemdEnv (44.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-706555 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-706555 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.626313956s)
helpers_test.go:175: Cleaning up "force-systemd-env-706555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-706555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-706555: (1.0294414s)
--- PASS: TestForceSystemdEnv (44.66s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.11s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.11s)

                                                
                                    
x
+
TestErrorSpam/setup (36.58s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-887692 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-887692 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-887692 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-887692 --driver=kvm2  --container-runtime=crio: (36.584835299s)
--- PASS: TestErrorSpam/setup (36.58s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
x
+
TestErrorSpam/stop (5.7s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop: (2.301852257s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop: (2.064264996s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-887692 --log_dir /tmp/nospam-887692 stop: (1.328696975s)
--- PASS: TestErrorSpam/stop (5.70s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19443-13088/.minikube/files/etc/test/nested/copy/20279/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-732793 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.22292846s)
--- PASS: TestFunctional/serial/StartWithProxy (54.22s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --alsologtostderr -v=8
E0815 00:18:45.640373   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.647363   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.658732   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.680044   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.721382   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.802753   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:45.964275   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:46.286291   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:46.928558   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:48.210607   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:18:50.772161   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-732793 --alsologtostderr -v=8: (40.87275094s)
functional_test.go:663: soft start took 40.873407796s for "functional-732793" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.87s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-732793 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:3.1: (1.190084196s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:3.3: (1.22545036s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:latest
E0815 00:18:55.893598   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 cache add registry.k8s.io/pause:latest: (1.236268522s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)
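
The cache subcommands in this group pull images into the host-side cache and load them into the node's CRI-O. A condensed manual version, using an installed minikube binary instead of the test build:

minikube -p functional-732793 cache add registry.k8s.io/pause:3.1
minikube -p functional-732793 cache add registry.k8s.io/pause:3.3
minikube -p functional-732793 cache add registry.k8s.io/pause:latest

# cache list and cache delete run without a profile flag in the log above: the cache is host-wide
minikube cache list

# confirm the images are now visible to CRI-O inside the node
minikube -p functional-732793 ssh sudo crictl images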

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-732793 /tmp/TestFunctionalserialCacheCmdcacheadd_local3278306033/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache add minikube-local-cache-test:functional-732793
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 cache add minikube-local-cache-test:functional-732793: (1.756823009s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache delete minikube-local-cache-test:functional-732793
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-732793
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.61755ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
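
cache reload repopulates the node from the host cache, which is what turns the failing crictl inspecti above into a passing one. Condensed:

# drop the image from the node's runtime only; the host cache still has it
minikube -p functional-732793 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-732793 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: no such image

# push everything in the host cache back into the node
minikube -p functional-732793 cache reload
minikube -p functional-732793 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again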

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 kubectl -- --context functional-732793 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-732793 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.25s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 00:19:06.135032   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:19:26.616809   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-732793 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.245169217s)
functional_test.go:761: restart took 33.245308508s for "functional-732793" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.25s)
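
--extra-config passes component flags straight through to kubeadm, here enabling an extra admission plugin on the apiserver. A hand-run check; the component=kube-apiserver label is the one kubeadm puts on its static pod and is an assumption here, not taken from the log:

minikube start -p functional-732793 --wait=all \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision

# the plugin should show up on the running apiserver's command line
kubectl --context functional-732793 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | grep -o 'enable-admission-plugins=[^" ]*'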

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-732793 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 logs: (1.305329238s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 logs --file /tmp/TestFunctionalserialLogsFileCmd3045584476/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 logs --file /tmp/TestFunctionalserialLogsFileCmd3045584476/001/logs.txt: (1.264777773s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.44s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-732793 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-732793
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-732793: exit status 115 (268.752184ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.240:30428 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-732793 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 config get cpus: exit status 14 (56.077493ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 config get cpus: exit status 14 (50.508842ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
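
Exit status 14 above is the expected "key not found" result after an unset; the full set/get/unset cycle is simply:

minikube -p functional-732793 config set cpus 2
minikube -p functional-732793 config get cpus     # prints 2
minikube -p functional-732793 config unset cpus
minikube -p functional-732793 config get cpus     # exit status 14: key not in config

The values are persisted on the host under the MINIKUBE_HOME directory (by default ~/.minikube/config/config.json).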

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-732793 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-732793 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29602: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.98s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-732793 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.836239ms)

                                                
                                                
-- stdout --
	* [functional-732793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:20:05.300211   29322 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:20:05.300354   29322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:05.300370   29322 out.go:304] Setting ErrFile to fd 2...
	I0815 00:20:05.300377   29322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:05.300824   29322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:20:05.301631   29322 out.go:298] Setting JSON to false
	I0815 00:20:05.303057   29322 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3750,"bootTime":1723677455,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:20:05.303152   29322 start.go:139] virtualization: kvm guest
	I0815 00:20:05.304796   29322 out.go:177] * [functional-732793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 00:20:05.306620   29322 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:20:05.306667   29322 notify.go:220] Checking for updates...
	I0815 00:20:05.308704   29322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:20:05.309814   29322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:20:05.310966   29322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:05.312421   29322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:20:05.313848   29322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:20:05.315424   29322 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:20:05.316065   29322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:05.316166   29322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:05.336074   29322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0815 00:20:05.336408   29322 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:05.336995   29322 main.go:141] libmachine: Using API Version  1
	I0815 00:20:05.337018   29322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:05.337308   29322 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:05.337515   29322 main.go:141] libmachine: (functional-732793) Calling .DriverName
	I0815 00:20:05.337738   29322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:20:05.338274   29322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:05.338319   29322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:05.353750   29322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0815 00:20:05.354115   29322 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:05.354568   29322 main.go:141] libmachine: Using API Version  1
	I0815 00:20:05.354594   29322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:05.354899   29322 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:05.355060   29322 main.go:141] libmachine: (functional-732793) Calling .DriverName
	I0815 00:20:05.389181   29322 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 00:20:05.390330   29322 start.go:297] selected driver: kvm2
	I0815 00:20:05.390360   29322 start.go:901] validating driver "kvm2" against &{Name:functional-732793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-732793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:20:05.390451   29322 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:20:05.392410   29322 out.go:177] 
	W0815 00:20:05.393635   29322 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 00:20:05.394741   29322 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-732793 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-732793 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (121.308674ms)

                                                
                                                
-- stdout --
	* [functional-732793] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:20:05.559973   29409 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:20:05.560256   29409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:05.560265   29409 out.go:304] Setting ErrFile to fd 2...
	I0815 00:20:05.560276   29409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:20:05.560549   29409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:20:05.561064   29409 out.go:298] Setting JSON to false
	I0815 00:20:05.562134   29409 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3751,"bootTime":1723677455,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 00:20:05.562206   29409 start.go:139] virtualization: kvm guest
	I0815 00:20:05.564639   29409 out.go:177] * [functional-732793] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0815 00:20:05.565827   29409 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 00:20:05.565863   29409 notify.go:220] Checking for updates...
	I0815 00:20:05.567883   29409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 00:20:05.569019   29409 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 00:20:05.570054   29409 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 00:20:05.571036   29409 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 00:20:05.572110   29409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 00:20:05.573350   29409 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:20:05.573752   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:05.573799   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:05.588026   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0815 00:20:05.588374   29409 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:05.588852   29409 main.go:141] libmachine: Using API Version  1
	I0815 00:20:05.588877   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:05.589162   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:05.589365   29409 main.go:141] libmachine: (functional-732793) Calling .DriverName
	I0815 00:20:05.589606   29409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 00:20:05.589927   29409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:20:05.589975   29409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:20:05.603989   29409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0815 00:20:05.604283   29409 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:20:05.604750   29409 main.go:141] libmachine: Using API Version  1
	I0815 00:20:05.604774   29409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:20:05.605033   29409 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:20:05.605181   29409 main.go:141] libmachine: (functional-732793) Calling .DriverName
	I0815 00:20:05.635120   29409 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0815 00:20:05.636110   29409 start.go:297] selected driver: kvm2
	I0815 00:20:05.636133   29409 start.go:901] validating driver "kvm2" against &{Name:functional-732793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-732793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 00:20:05.636271   29409 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 00:20:05.638080   29409 out.go:177] 
	W0815 00:20:05.639048   29409 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 00:20:05.639992   29409 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
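
status accepts a Go template via -f; in the format string above only {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are interpreted, so the misspelled "kublet" label is literal output text, not a field name. For example:

minikube -p functional-732793 status
minikube -p functional-732793 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
minikube -p functional-732793 status -o json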

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-732793 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-732793 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j589j" [4864c506-6482-45f8-80b1-9187d0cf5e93] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-j589j" [4864c506-6482-45f8-80b1-9187d0cf5e93] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.00537742s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.240:32702
functional_test.go:1675: http://192.168.39.240:32702: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-j589j

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.240:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.240:32702
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.47s)
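
The echoserver response above is reached through a NodePort that minikube service resolves to a full URL. Reproduced by hand (the kubectl wait step is an addition for sequencing, not part of the test):

kubectl --context functional-732793 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-732793 expose deployment hello-node-connect \
  --type=NodePort --port=8080

kubectl --context functional-732793 wait --for=condition=ready pod \
  -l app=hello-node-connect --timeout=120s

# prints something like http://192.168.39.240:32702
URL=$(minikube -p functional-732793 service hello-node-connect --url)
curl -s "$URL"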

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (43.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ce294938-62db-4599-a30d-f979a9ba8e60] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007681727s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-732793 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-732793 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-732793 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-732793 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-732793 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [56e3a3e3-4460-47f4-9c88-4bc418f6576b] Pending
helpers_test.go:344: "sp-pod" [56e3a3e3-4460-47f4-9c88-4bc418f6576b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [56e3a3e3-4460-47f4-9c88-4bc418f6576b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.00459995s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-732793 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-732793 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-732793 delete -f testdata/storage-provisioner/pod.yaml: (1.437425706s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-732793 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e4453755-3809-478c-a56d-deaa95effb09] Pending
helpers_test.go:344: "sp-pod" [e4453755-3809-478c-a56d-deaa95effb09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e4453755-3809-478c-a56d-deaa95effb09] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004281494s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-732793 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.52s)
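The sequence above exercises dynamic provisioning end to end: claim a volume, mount it in a pod, write a file, delete and recreate the pod, and check that the file survived. The testdata/storage-provisioner manifests themselves are not reproduced in this report, so the following is only a minimal hand-run equivalent, with the container image and requested size assumed:

kubectl --context functional-732793 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi        # assumed size; the real pvc.yaml is not shown in this report
EOF
kubectl --context functional-732793 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx   # assumed image; the real pod.yaml is not shown in this report
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-732793 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-732793 delete pod sp-pod
# recreate the pod from the same manifest, then confirm the file persisted across pods:
kubectl --context functional-732793 exec sp-pod -- ls /tmp/mount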

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh -n functional-732793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cp functional-732793:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd202140956/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh -n functional-732793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh -n functional-732793 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)

                                                
                                    
TestFunctional/parallel/MySQL (21.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-732793 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hz267" [a0ab856f-4546-4b98-aaa7-dc2319f60c22] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hz267" [a0ab856f-4546-4b98-aaa7-dc2319f60c22] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.108871213s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- mysql -ppassword -e "show databases;": exit status 1 (350.996749ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- mysql -ppassword -e "show databases;": exit status 1 (364.531177ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.94s)
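The two ERROR 2002 exits above are expected rather than failures: the first exec runs while mysqld inside the pod is still initializing, so the client cannot reach /var/run/mysqld/mysqld.sock yet, and the test simply re-runs the query until it succeeds. A hand-run equivalent of that retry (the pod name is specific to this run):

until kubectl --context functional-732793 exec mysql-6cdb49bbb-hz267 -- \
    mysql -ppassword -e "show databases;"; do
  sleep 2   # wait for mysqld to finish starting and create its socket
done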

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20279/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /etc/test/nested/copy/20279/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20279.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /etc/ssl/certs/20279.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20279.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /usr/share/ca-certificates/20279.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/202792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /etc/ssl/certs/202792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/202792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /usr/share/ca-certificates/202792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)
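The pairs checked here are the certificate files synced by minikube (20279.pem, 202792.pem) and their hashed aliases in /etc/ssl/certs (51391683.0, 3ec20f2e.0), which follow the usual OpenSSL subject-hash naming convention. Assuming openssl is available in the guest, the hash name can be recomputed from the PEM file to confirm the two paths refer to the same certificate (illustrative; the hash values are specific to the test certificates):

out/minikube-linux-amd64 -p functional-732793 ssh \
  "openssl x509 -noout -hash -in /usr/share/ca-certificates/20279.pem"
# expected to print 51391683, the subject hash used for the .0 file name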

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-732793 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
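The go-template above prints only the label keys of the first node; when checking by hand, the same information is easier to read with --show-labels:

kubectl --context functional-732793 get nodes --show-labels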

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh "sudo systemctl is-active docker": exit status 1 (226.09021ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh "sudo systemctl is-active containerd": exit status 1 (221.895321ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
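The non-zero exits above are the expected outcome, not failures: with crio as the configured runtime, docker and containerd are stopped, and systemctl is-active prints "inactive" and exits with code 3 for a stopped unit, which the ssh wrapper reports as "Process exited with status 3" and minikube as exit status 1. The active runtime answers the same probe with exit code 0 (for reference):

out/minikube-linux-amd64 -p functional-732793 ssh "sudo systemctl is-active crio"
# expected output: active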

                                                
                                    
TestFunctional/parallel/License (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732793 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-732793
localhost/kicbase/echo-server:functional-732793
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732793 image ls --format short --alsologtostderr:
I0815 00:20:16.743587   30143 out.go:291] Setting OutFile to fd 1 ...
I0815 00:20:16.743714   30143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:16.743724   30143 out.go:304] Setting ErrFile to fd 2...
I0815 00:20:16.743731   30143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:16.743913   30143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
I0815 00:20:16.744443   30143 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:16.744558   30143 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:16.745010   30143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:16.745067   30143 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:16.760090   30143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
I0815 00:20:16.760515   30143 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:16.761085   30143 main.go:141] libmachine: Using API Version  1
I0815 00:20:16.761108   30143 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:16.761392   30143 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:16.761596   30143 main.go:141] libmachine: (functional-732793) Calling .GetState
I0815 00:20:16.763236   30143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:16.763298   30143 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:16.777861   30143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
I0815 00:20:16.778253   30143 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:16.778668   30143 main.go:141] libmachine: Using API Version  1
I0815 00:20:16.778690   30143 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:16.778959   30143 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:16.779104   30143 main.go:141] libmachine: (functional-732793) Calling .DriverName
I0815 00:20:16.779304   30143 ssh_runner.go:195] Run: systemctl --version
I0815 00:20:16.779333   30143 main.go:141] libmachine: (functional-732793) Calling .GetSSHHostname
I0815 00:20:16.781570   30143 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:16.781925   30143 main.go:141] libmachine: (functional-732793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:d5:62", ip: ""} in network mk-functional-732793: {Iface:virbr1 ExpiryTime:2024-08-15 01:17:31 +0000 UTC Type:0 Mac:52:54:00:93:d5:62 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-732793 Clientid:01:52:54:00:93:d5:62}
I0815 00:20:16.781957   30143 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined IP address 192.168.39.240 and MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:16.782050   30143 main.go:141] libmachine: (functional-732793) Calling .GetSSHPort
I0815 00:20:16.782205   30143 main.go:141] libmachine: (functional-732793) Calling .GetSSHKeyPath
I0815 00:20:16.782331   30143 main.go:141] libmachine: (functional-732793) Calling .GetSSHUsername
I0815 00:20:16.782455   30143 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/functional-732793/id_rsa Username:docker}
I0815 00:20:16.866847   30143 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 00:20:16.909534   30143 main.go:141] libmachine: Making call to close driver server
I0815 00:20:16.909549   30143 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:16.909801   30143 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:16.909853   30143 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:16.909871   30143 main.go:141] libmachine: Making call to close driver server
I0815 00:20:16.909880   30143 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:16.909817   30143 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
I0815 00:20:16.910069   30143 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:16.910082   30143 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
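As the debug log shows, image ls is answered by running sudo crictl images --output json inside the VM; the same raw data can be fetched directly when debugging a listing discrepancy:

out/minikube-linux-amd64 -p functional-732793 ssh "sudo crictl images --output json"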

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732793 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-732793  | 968f4c97ccd6b | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| localhost/kicbase/echo-server           | functional-732793  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732793 image ls --format table --alsologtostderr:
I0815 00:20:19.557092   30287 out.go:291] Setting OutFile to fd 1 ...
I0815 00:20:19.557421   30287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:19.557434   30287 out.go:304] Setting ErrFile to fd 2...
I0815 00:20:19.557440   30287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:19.557743   30287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
I0815 00:20:19.558502   30287 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:19.558671   30287 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:19.559236   30287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:19.559301   30287 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:19.574253   30287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
I0815 00:20:19.574782   30287 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:19.575420   30287 main.go:141] libmachine: Using API Version  1
I0815 00:20:19.575440   30287 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:19.575771   30287 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:19.575956   30287 main.go:141] libmachine: (functional-732793) Calling .GetState
I0815 00:20:19.578106   30287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:19.578151   30287 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:19.592709   30287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
I0815 00:20:19.593200   30287 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:19.593810   30287 main.go:141] libmachine: Using API Version  1
I0815 00:20:19.593846   30287 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:19.594188   30287 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:19.594352   30287 main.go:141] libmachine: (functional-732793) Calling .DriverName
I0815 00:20:19.594592   30287 ssh_runner.go:195] Run: systemctl --version
I0815 00:20:19.594621   30287 main.go:141] libmachine: (functional-732793) Calling .GetSSHHostname
I0815 00:20:19.597423   30287 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:19.597819   30287 main.go:141] libmachine: (functional-732793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:d5:62", ip: ""} in network mk-functional-732793: {Iface:virbr1 ExpiryTime:2024-08-15 01:17:31 +0000 UTC Type:0 Mac:52:54:00:93:d5:62 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-732793 Clientid:01:52:54:00:93:d5:62}
I0815 00:20:19.597848   30287 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined IP address 192.168.39.240 and MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:19.597956   30287 main.go:141] libmachine: (functional-732793) Calling .GetSSHPort
I0815 00:20:19.598135   30287 main.go:141] libmachine: (functional-732793) Calling .GetSSHKeyPath
I0815 00:20:19.598310   30287 main.go:141] libmachine: (functional-732793) Calling .GetSSHUsername
I0815 00:20:19.598468   30287 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/functional-732793/id_rsa Username:docker}
I0815 00:20:19.695393   30287 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 00:20:19.744428   30287 main.go:141] libmachine: Making call to close driver server
I0815 00:20:19.744449   30287 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:19.744765   30287 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
I0815 00:20:19.744774   30287 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:19.744799   30287 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:19.744816   30287 main.go:141] libmachine: Making call to close driver server
I0815 00:20:19.744829   30287 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:19.745044   30287 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:19.745056   30287 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732793 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"968f4c97ccd6b671d9c41ebc972839775be4caf1a70aad84471ffbe07ef59fd9","repoDigests":["localhost/minikube-local-cache-test@sha256:0c8754fce9b4fb7e953384fed57fab199393f142f2c25790fdbfe99ed5b407f1"],"repoTags":["localhost/minikube-local-cache-test:functional-732793"],"size":"3330"},{"id":"0457335668
33c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":[],"size":"1462480"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568c
a9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a"
,"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0
476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:l
atest"],"size":"191750286"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-732793"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha
256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732793 image ls --format json --alsologtostderr:
I0815 00:20:19.272147   30263 out.go:291] Setting OutFile to fd 1 ...
I0815 00:20:19.272266   30263 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:19.272276   30263 out.go:304] Setting ErrFile to fd 2...
I0815 00:20:19.272280   30263 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:19.272456   30263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
I0815 00:20:19.273104   30263 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:19.273216   30263 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:19.273584   30263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:19.273642   30263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:19.288457   30263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
I0815 00:20:19.289061   30263 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:19.289941   30263 main.go:141] libmachine: Using API Version  1
I0815 00:20:19.289972   30263 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:19.290373   30263 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:19.290573   30263 main.go:141] libmachine: (functional-732793) Calling .GetState
I0815 00:20:19.292560   30263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:19.292610   30263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:19.308124   30263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37365
I0815 00:20:19.308595   30263 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:19.309122   30263 main.go:141] libmachine: Using API Version  1
I0815 00:20:19.309142   30263 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:19.309511   30263 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:19.309737   30263 main.go:141] libmachine: (functional-732793) Calling .DriverName
I0815 00:20:19.309972   30263 ssh_runner.go:195] Run: systemctl --version
I0815 00:20:19.310007   30263 main.go:141] libmachine: (functional-732793) Calling .GetSSHHostname
I0815 00:20:19.312936   30263 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:19.313305   30263 main.go:141] libmachine: (functional-732793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:d5:62", ip: ""} in network mk-functional-732793: {Iface:virbr1 ExpiryTime:2024-08-15 01:17:31 +0000 UTC Type:0 Mac:52:54:00:93:d5:62 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-732793 Clientid:01:52:54:00:93:d5:62}
I0815 00:20:19.313341   30263 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined IP address 192.168.39.240 and MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:19.313393   30263 main.go:141] libmachine: (functional-732793) Calling .GetSSHPort
I0815 00:20:19.313559   30263 main.go:141] libmachine: (functional-732793) Calling .GetSSHKeyPath
I0815 00:20:19.313720   30263 main.go:141] libmachine: (functional-732793) Calling .GetSSHUsername
I0815 00:20:19.313847   30263 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/functional-732793/id_rsa Username:docker}
I0815 00:20:19.410482   30263 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 00:20:19.498602   30263 main.go:141] libmachine: Making call to close driver server
I0815 00:20:19.498613   30263 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:19.498886   30263 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:19.498907   30263 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:19.498924   30263 main.go:141] libmachine: Making call to close driver server
I0815 00:20:19.498928   30263 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
I0815 00:20:19.498933   30263 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:19.499192   30263 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:19.499239   30263 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
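The JSON format is a flat array of {id, repoDigests, repoTags, size} objects, which makes it convenient to post-process; for example, listing only the tagged references (assuming jq is installed on the host):

out/minikube-linux-amd64 -p functional-732793 image ls --format json | jq -r '.[].repoTags[]'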

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732793 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 968f4c97ccd6b671d9c41ebc972839775be4caf1a70aad84471ffbe07ef59fd9
repoDigests:
- localhost/minikube-local-cache-test@sha256:0c8754fce9b4fb7e953384fed57fab199393f142f2c25790fdbfe99ed5b407f1
repoTags:
- localhost/minikube-local-cache-test:functional-732793
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-732793
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732793 image ls --format yaml --alsologtostderr:
I0815 00:20:16.954790   30183 out.go:291] Setting OutFile to fd 1 ...
I0815 00:20:16.955037   30183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:16.955046   30183 out.go:304] Setting ErrFile to fd 2...
I0815 00:20:16.955051   30183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:16.955292   30183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
I0815 00:20:16.955936   30183 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:16.956044   30183 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:16.956425   30183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:16.956474   30183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:16.971220   30183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
I0815 00:20:16.971706   30183 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:16.972302   30183 main.go:141] libmachine: Using API Version  1
I0815 00:20:16.972328   30183 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:16.972811   30183 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:16.973000   30183 main.go:141] libmachine: (functional-732793) Calling .GetState
I0815 00:20:16.975021   30183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:16.975078   30183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:16.991163   30183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
I0815 00:20:16.991619   30183 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:16.992175   30183 main.go:141] libmachine: Using API Version  1
I0815 00:20:16.992206   30183 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:16.992583   30183 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:16.992794   30183 main.go:141] libmachine: (functional-732793) Calling .DriverName
I0815 00:20:16.992986   30183 ssh_runner.go:195] Run: systemctl --version
I0815 00:20:16.993014   30183 main.go:141] libmachine: (functional-732793) Calling .GetSSHHostname
I0815 00:20:16.995516   30183 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:16.995926   30183 main.go:141] libmachine: (functional-732793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:d5:62", ip: ""} in network mk-functional-732793: {Iface:virbr1 ExpiryTime:2024-08-15 01:17:31 +0000 UTC Type:0 Mac:52:54:00:93:d5:62 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-732793 Clientid:01:52:54:00:93:d5:62}
I0815 00:20:16.995959   30183 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined IP address 192.168.39.240 and MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:16.996055   30183 main.go:141] libmachine: (functional-732793) Calling .GetSSHPort
I0815 00:20:16.996209   30183 main.go:141] libmachine: (functional-732793) Calling .GetSSHKeyPath
I0815 00:20:16.996352   30183 main.go:141] libmachine: (functional-732793) Calling .GetSSHUsername
I0815 00:20:16.996506   30183 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/functional-732793/id_rsa Username:docker}
I0815 00:20:17.075002   30183 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 00:20:17.110234   30183 main.go:141] libmachine: Making call to close driver server
I0815 00:20:17.110245   30183 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:17.110514   30183 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:17.110535   30183 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:17.110542   30183 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
I0815 00:20:17.110544   30183 main.go:141] libmachine: Making call to close driver server
I0815 00:20:17.110553   30183 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:17.110732   30183 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:17.110765   30183 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:17.110767   30183 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh pgrep buildkitd: exit status 1 (183.610031ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image build -t localhost/my-image:functional-732793 testdata/build --alsologtostderr
2024/08/15 00:20:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 image build -t localhost/my-image:functional-732793 testdata/build --alsologtostderr: (3.11222315s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-732793 image build -t localhost/my-image:functional-732793 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 962a51b902a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-732793
--> b844efe7940
Successfully tagged localhost/my-image:functional-732793
b844efe7940ae540ae78258e7d4c232c720c9c30130fdd6753f59fe1a0d703e0
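The three STEP lines above imply a build context of roughly this shape under testdata/build; this is a reconstruction for illustration only, since the actual Dockerfile and content.txt are not included in the report:

# hypothetical reconstruction of testdata/build/Dockerfile based on the STEP lines above
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF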
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-732793 image build -t localhost/my-image:functional-732793 testdata/build --alsologtostderr:
I0815 00:20:17.336483   30238 out.go:291] Setting OutFile to fd 1 ...
I0815 00:20:17.336766   30238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:17.336776   30238 out.go:304] Setting ErrFile to fd 2...
I0815 00:20:17.336780   30238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0815 00:20:17.336972   30238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
I0815 00:20:17.337540   30238 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:17.338027   30238 config.go:182] Loaded profile config "functional-732793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 00:20:17.338367   30238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:17.338413   30238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:17.353029   30238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
I0815 00:20:17.353527   30238 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:17.353995   30238 main.go:141] libmachine: Using API Version  1
I0815 00:20:17.354018   30238 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:17.354311   30238 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:17.354512   30238 main.go:141] libmachine: (functional-732793) Calling .GetState
I0815 00:20:17.356140   30238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 00:20:17.356172   30238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 00:20:17.371950   30238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
I0815 00:20:17.372290   30238 main.go:141] libmachine: () Calling .GetVersion
I0815 00:20:17.372776   30238 main.go:141] libmachine: Using API Version  1
I0815 00:20:17.372800   30238 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 00:20:17.373062   30238 main.go:141] libmachine: () Calling .GetMachineName
I0815 00:20:17.373224   30238 main.go:141] libmachine: (functional-732793) Calling .DriverName
I0815 00:20:17.373392   30238 ssh_runner.go:195] Run: systemctl --version
I0815 00:20:17.373420   30238 main.go:141] libmachine: (functional-732793) Calling .GetSSHHostname
I0815 00:20:17.376127   30238 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:17.376530   30238 main.go:141] libmachine: (functional-732793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:d5:62", ip: ""} in network mk-functional-732793: {Iface:virbr1 ExpiryTime:2024-08-15 01:17:31 +0000 UTC Type:0 Mac:52:54:00:93:d5:62 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:functional-732793 Clientid:01:52:54:00:93:d5:62}
I0815 00:20:17.376561   30238 main.go:141] libmachine: (functional-732793) DBG | domain functional-732793 has defined IP address 192.168.39.240 and MAC address 52:54:00:93:d5:62 in network mk-functional-732793
I0815 00:20:17.376660   30238 main.go:141] libmachine: (functional-732793) Calling .GetSSHPort
I0815 00:20:17.376805   30238 main.go:141] libmachine: (functional-732793) Calling .GetSSHKeyPath
I0815 00:20:17.376918   30238 main.go:141] libmachine: (functional-732793) Calling .GetSSHUsername
I0815 00:20:17.377011   30238 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/functional-732793/id_rsa Username:docker}
I0815 00:20:17.454489   30238 build_images.go:161] Building image from path: /tmp/build.590585426.tar
I0815 00:20:17.454564   30238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 00:20:17.464276   30238 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.590585426.tar
I0815 00:20:17.468278   30238 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.590585426.tar: stat -c "%s %y" /var/lib/minikube/build/build.590585426.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.590585426.tar': No such file or directory
I0815 00:20:17.468311   30238 ssh_runner.go:362] scp /tmp/build.590585426.tar --> /var/lib/minikube/build/build.590585426.tar (3072 bytes)
I0815 00:20:17.491516   30238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.590585426
I0815 00:20:17.500480   30238 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.590585426 -xf /var/lib/minikube/build/build.590585426.tar
I0815 00:20:17.509085   30238 crio.go:315] Building image: /var/lib/minikube/build/build.590585426
I0815 00:20:17.509168   30238 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-732793 /var/lib/minikube/build/build.590585426 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0815 00:20:20.379245   30238 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-732793 /var/lib/minikube/build/build.590585426 --cgroup-manager=cgroupfs: (2.870041883s)
I0815 00:20:20.379331   30238 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.590585426
I0815 00:20:20.393288   30238 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.590585426.tar
I0815 00:20:20.405416   30238 build_images.go:217] Built localhost/my-image:functional-732793 from /tmp/build.590585426.tar
I0815 00:20:20.405444   30238 build_images.go:133] succeeded building to: functional-732793
I0815 00:20:20.405448   30238 build_images.go:134] failed building to: 
I0815 00:20:20.405508   30238 main.go:141] libmachine: Making call to close driver server
I0815 00:20:20.405525   30238 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:20.405774   30238 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:20.405803   30238 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 00:20:20.405810   30238 main.go:141] libmachine: Making call to close driver server
I0815 00:20:20.405810   30238 main.go:141] libmachine: (functional-732793) DBG | Closing plugin on server side
I0815 00:20:20.405817   30238 main.go:141] libmachine: (functional-732793) Calling .Close
I0815 00:20:20.406006   30238 main.go:141] libmachine: Successfully made call to close driver server
I0815 00:20:20.406017   30238 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.702864947s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-732793
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ServiceCmd/DeployApp (51.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-732793 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-732793 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-zcj6d" [52fb8757-73b4-4d7a-952c-8cca2f9a763d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-zcj6d" [52fb8757-73b4-4d7a-952c-8cca2f9a763d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 51.004066798s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (51.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image load --daemon kicbase/echo-server:functional-732793 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 image load --daemon kicbase/echo-server:functional-732793 --alsologtostderr: (1.076748487s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image load --daemon kicbase/echo-server:functional-732793 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-732793
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image load --daemon kicbase/echo-server:functional-732793 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image save kicbase/echo-server:functional-732793 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 image save kicbase/echo-server:functional-732793 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (6.49372802s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image rm kicbase/echo-server:functional-732793 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.220312623s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-732793
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 image save --daemon kicbase/echo-server:functional-732793 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 image save --daemon kicbase/echo-server:functional-732793 --alsologtostderr: (2.26151836s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-732793
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.30s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "221.475849ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.550851ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "257.109491ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "51.895452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (8.38s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdany-port53749376/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723681205215064154" to /tmp/TestFunctionalparallelMountCmdany-port53749376/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723681205215064154" to /tmp/TestFunctionalparallelMountCmdany-port53749376/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723681205215064154" to /tmp/TestFunctionalparallelMountCmdany-port53749376/001/test-1723681205215064154
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (213.516075ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 00:20 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 00:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 00:20 test-1723681205215064154
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh cat /mount-9p/test-1723681205215064154
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-732793 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aa5faec1-a324-4695-9bfe-b9170fa432b6] Pending
helpers_test.go:344: "busybox-mount" [aa5faec1-a324-4695-9bfe-b9170fa432b6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0815 00:20:07.579146   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [aa5faec1-a324-4695-9bfe-b9170fa432b6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aa5faec1-a324-4695-9bfe-b9170fa432b6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003851567s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-732793 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdany-port53749376/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.38s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdspecific-port3863353322/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.554544ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdspecific-port3863353322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-732793 ssh "sudo umount -f /mount-9p": exit status 1 (225.019725ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-732793 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdspecific-port3863353322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-732793 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-732793 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2018434512/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.12s)

TestFunctional/parallel/ServiceCmd/List (1.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 service list: (1.222324542s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.22s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-732793 service list -o json: (1.222544212s)
functional_test.go:1494: Took "1.222659259s" to run "out/minikube-linux-amd64 -p functional-732793 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.22s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.240:30208
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-732793 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.240:30208
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-732793
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-732793
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-732793
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (195.62s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863044 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 00:21:29.501284   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:23:45.641339   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-863044 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.982648407s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.62s)

TestMultiControlPlane/serial/DeployApp (6.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-863044 -- rollout status deployment/busybox: (4.945749658s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-ck6d9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-dpcjf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-zmr7b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-ck6d9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-dpcjf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-zmr7b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-ck6d9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-dpcjf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-zmr7b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-ck6d9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-ck6d9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-dpcjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-dpcjf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-zmr7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-863044 -- exec busybox-7dff88458-zmr7b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (59.59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-863044 -v=7 --alsologtostderr
E0815 00:24:13.343068   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.523256   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.529619   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.540964   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.562708   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.604108   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.685520   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:41.847044   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:42.169102   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:42.810487   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:44.092149   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:46.653848   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:24:51.775586   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-863044 -v=7 --alsologtostderr: (58.813893014s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.59s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-863044 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

TestMultiControlPlane/serial/CopyFile (12.11s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status --output json -v=7 --alsologtostderr
E0815 00:25:02.017593   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp testdata/cp-test.txt ha-863044:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044:/home/docker/cp-test.txt ha-863044-m02:/home/docker/cp-test_ha-863044_ha-863044-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test_ha-863044_ha-863044-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044:/home/docker/cp-test.txt ha-863044-m03:/home/docker/cp-test_ha-863044_ha-863044-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test_ha-863044_ha-863044-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044:/home/docker/cp-test.txt ha-863044-m04:/home/docker/cp-test_ha-863044_ha-863044-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test_ha-863044_ha-863044-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp testdata/cp-test.txt ha-863044-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m02:/home/docker/cp-test.txt ha-863044:/home/docker/cp-test_ha-863044-m02_ha-863044.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test_ha-863044-m02_ha-863044.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m02:/home/docker/cp-test.txt ha-863044-m03:/home/docker/cp-test_ha-863044-m02_ha-863044-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test_ha-863044-m02_ha-863044-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m02:/home/docker/cp-test.txt ha-863044-m04:/home/docker/cp-test_ha-863044-m02_ha-863044-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test_ha-863044-m02_ha-863044-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp testdata/cp-test.txt ha-863044-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt ha-863044:/home/docker/cp-test_ha-863044-m03_ha-863044.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test_ha-863044-m03_ha-863044.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt ha-863044-m02:/home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test_ha-863044-m03_ha-863044-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m03:/home/docker/cp-test.txt ha-863044-m04:/home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test_ha-863044-m03_ha-863044-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp testdata/cp-test.txt ha-863044-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3188715365/001/cp-test_ha-863044-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt ha-863044:/home/docker/cp-test_ha-863044-m04_ha-863044.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044 "sudo cat /home/docker/cp-test_ha-863044-m04_ha-863044.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt ha-863044-m02:/home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m02 "sudo cat /home/docker/cp-test_ha-863044-m04_ha-863044-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 cp ha-863044-m04:/home/docker/cp-test.txt ha-863044-m03:/home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 ssh -n ha-863044-m03 "sudo cat /home/docker/cp-test_ha-863044-m04_ha-863044-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.473522835s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-863044 node delete m03 -v=7 --alsologtostderr: (15.805604234s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (453.53s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-863044 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 00:38:45.640939   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:39:41.524100   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:41:04.588885   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:43:45.641053   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 00:44:41.522643   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-863044 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m32.790295882s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (453.53s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

TestMultiControlPlane/serial/AddSecondaryNode (75.97s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-863044 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-863044 --control-plane -v=7 --alsologtostderr: (1m15.16649404s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-863044 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestJSONOutput/start/Command (75.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-384272 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-384272 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.633816904s)
--- PASS: TestJSONOutput/start/Command (75.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-384272 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-384272 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.58s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-384272 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-384272 --output=json --user=testUser: (6.584652822s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-729628 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-729628 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.847927ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c513ff59-a108-4dfe-a128-69ae9408a988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-729628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"874d1c15-91e4-47e1-9cac-41306658dc65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19443"}}
	{"specversion":"1.0","id":"a7e1d166-36e8-46f1-91a0-ab57e04744b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"084053ca-e6be-45f7-ab4f-9289ca0a3cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig"}}
	{"specversion":"1.0","id":"49a536c5-2381-4727-9ade-a6942c41c3b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube"}}
	{"specversion":"1.0","id":"051c3df9-791d-4824-bebf-f57479768c39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d30591ff-76bb-4a54-a18c-7849caf7ac5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6628b61-a9a9-4631-950b-39f21854f7a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-729628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-729628
--- PASS: TestErrorJSONOutput (0.18s)
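
Each stdout line above is a single CloudEvents-style JSON object: the event kind is carried in "type" (step, info, or error suffixes) and the human-readable text in "data.message", with error events also carrying "exitcode" and "advice". A quick way to filter such a stream for errors (a sketch; assumes jq is installed and the profile name is illustrative):

    # Keep only minikube error events and print their exit code and message
    out/minikube-linux-amd64 start -p <profile> --output=json --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'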

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (83.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-067918 --driver=kvm2  --container-runtime=crio
E0815 00:48:45.640612   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-067918 --driver=kvm2  --container-runtime=crio: (43.160934002s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-070692 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-070692 --driver=kvm2  --container-runtime=crio: (37.274825516s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-067918
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-070692
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-070692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-070692
helpers_test.go:175: Cleaning up "first-067918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-067918
--- PASS: TestMinikubeProfile (83.18s)

TestMountStart/serial/StartWithMountFirst (31.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-053854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 00:49:41.522775   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-053854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.053672242s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.05s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-053854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-053854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
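
The verification above does two things over SSH: list the mounted host directory and confirm a 9p entry in the guest's mount table. The same check by hand (a sketch; profile name illustrative):

    # Confirm the host mount is visible and backed by 9p inside the guest
    out/minikube-linux-amd64 -p <profile> ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p <profile> ssh -- mount | grep 9p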

TestMountStart/serial/StartWithMountSecond (24.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-070566 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-070566 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.421822805s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.42s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-053854 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-070566
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-070566: (1.263709129s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (22.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-070566
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-070566: (21.302032969s)
--- PASS: TestMountStart/serial/RestartStopped (22.30s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-070566 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (111.13s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-978269 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 00:51:48.707465   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-978269 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.745136538s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.13s)

TestMultiNode/serial/DeployApp2Nodes (5.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-978269 -- rollout status deployment/busybox: (3.930841532s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-7t6jw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-ln6j4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-7t6jw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-ln6j4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-7t6jw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-ln6j4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-7t6jw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-7t6jw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-ln6j4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-978269 -- exec busybox-7dff88458-ln6j4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
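
The host-reachability probe above pulls the resolved address of host.minikube.internal out of busybox's nslookup output (fifth line, third field) and then pings it once from inside each pod. Repeated by hand it looks like this (a sketch; the pod name is illustrative, the address is the one resolved above):

    # Resolve the host-side gateway from inside a pod, then ping it once
    kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"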

TestMultiNode/serial/AddNode (50.92s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-978269 -v 3 --alsologtostderr
E0815 00:53:45.640532   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-978269 -v 3 --alsologtostderr: (50.380666045s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.92s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-978269 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp testdata/cp-test.txt multinode-978269:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269:/home/docker/cp-test.txt multinode-978269-m02:/home/docker/cp-test_multinode-978269_multinode-978269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test_multinode-978269_multinode-978269-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269:/home/docker/cp-test.txt multinode-978269-m03:/home/docker/cp-test_multinode-978269_multinode-978269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test_multinode-978269_multinode-978269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp testdata/cp-test.txt multinode-978269-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt multinode-978269:/home/docker/cp-test_multinode-978269-m02_multinode-978269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test_multinode-978269-m02_multinode-978269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m02:/home/docker/cp-test.txt multinode-978269-m03:/home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test_multinode-978269-m02_multinode-978269-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp testdata/cp-test.txt multinode-978269-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1195475749/001/cp-test_multinode-978269-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt multinode-978269:/home/docker/cp-test_multinode-978269-m03_multinode-978269.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269 "sudo cat /home/docker/cp-test_multinode-978269-m03_multinode-978269.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 cp multinode-978269-m03:/home/docker/cp-test.txt multinode-978269-m02:/home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 ssh -n multinode-978269-m02 "sudo cat /home/docker/cp-test_multinode-978269-m03_multinode-978269-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.97s)
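
The copies above follow one round-trip pattern: push a file from the host into a node, pull it back out, copy it node to node, and verify every hop with "ssh -n <node> sudo cat". Condensed to its core (a sketch; profile and node names are illustrative, mirroring the multinode-978269 layout above):

    # host -> primary node, then primary -> second node, verifying each hop
    out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt <profile>:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p <profile> ssh -n <profile> "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p <profile> cp <profile>:/home/docker/cp-test.txt <profile>-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p <profile> ssh -n <profile>-m02 "sudo cat /home/docker/cp-test.txt"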

TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-978269 node stop m03: (1.365687273s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-978269 status: exit status 7 (401.108025ms)

                                                
                                                
-- stdout --
	multinode-978269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-978269-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-978269-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr: exit status 7 (406.953339ms)

                                                
                                                
-- stdout --
	multinode-978269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-978269-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-978269-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 00:54:00.043195   48598 out.go:291] Setting OutFile to fd 1 ...
	I0815 00:54:00.043310   48598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:54:00.043318   48598 out.go:304] Setting ErrFile to fd 2...
	I0815 00:54:00.043323   48598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 00:54:00.043477   48598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 00:54:00.043623   48598 out.go:298] Setting JSON to false
	I0815 00:54:00.043645   48598 mustload.go:65] Loading cluster: multinode-978269
	I0815 00:54:00.043666   48598 notify.go:220] Checking for updates...
	I0815 00:54:00.044050   48598 config.go:182] Loaded profile config "multinode-978269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 00:54:00.044064   48598 status.go:255] checking status of multinode-978269 ...
	I0815 00:54:00.044474   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.044511   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.064026   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
	I0815 00:54:00.064437   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.064920   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.064945   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.065243   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.065409   48598 main.go:141] libmachine: (multinode-978269) Calling .GetState
	I0815 00:54:00.066727   48598 status.go:330] multinode-978269 host status = "Running" (err=<nil>)
	I0815 00:54:00.066744   48598 host.go:66] Checking if "multinode-978269" exists ...
	I0815 00:54:00.067126   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.067166   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.082489   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0815 00:54:00.082885   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.083426   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.083445   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.083781   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.083958   48598 main.go:141] libmachine: (multinode-978269) Calling .GetIP
	I0815 00:54:00.086344   48598 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:54:00.086732   48598 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:54:00.086768   48598 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:54:00.086880   48598 host.go:66] Checking if "multinode-978269" exists ...
	I0815 00:54:00.087155   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.087194   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.101626   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I0815 00:54:00.102057   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.102524   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.102544   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.102807   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.102987   48598 main.go:141] libmachine: (multinode-978269) Calling .DriverName
	I0815 00:54:00.103161   48598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:54:00.103180   48598 main.go:141] libmachine: (multinode-978269) Calling .GetSSHHostname
	I0815 00:54:00.105803   48598 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:54:00.106202   48598 main.go:141] libmachine: (multinode-978269) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:90:59", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:51:16 +0000 UTC Type:0 Mac:52:54:00:78:90:59 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-978269 Clientid:01:52:54:00:78:90:59}
	I0815 00:54:00.106236   48598 main.go:141] libmachine: (multinode-978269) DBG | domain multinode-978269 has defined IP address 192.168.39.9 and MAC address 52:54:00:78:90:59 in network mk-multinode-978269
	I0815 00:54:00.106340   48598 main.go:141] libmachine: (multinode-978269) Calling .GetSSHPort
	I0815 00:54:00.106493   48598 main.go:141] libmachine: (multinode-978269) Calling .GetSSHKeyPath
	I0815 00:54:00.106652   48598 main.go:141] libmachine: (multinode-978269) Calling .GetSSHUsername
	I0815 00:54:00.106799   48598 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269/id_rsa Username:docker}
	I0815 00:54:00.187576   48598 ssh_runner.go:195] Run: systemctl --version
	I0815 00:54:00.193132   48598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:54:00.207710   48598 kubeconfig.go:125] found "multinode-978269" server: "https://192.168.39.9:8443"
	I0815 00:54:00.207749   48598 api_server.go:166] Checking apiserver status ...
	I0815 00:54:00.207801   48598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 00:54:00.220384   48598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup
	W0815 00:54:00.228416   48598 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 00:54:00.228473   48598 ssh_runner.go:195] Run: ls
	I0815 00:54:00.236817   48598 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I0815 00:54:00.240396   48598 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I0815 00:54:00.240418   48598 status.go:422] multinode-978269 apiserver status = Running (err=<nil>)
	I0815 00:54:00.240430   48598 status.go:257] multinode-978269 status: &{Name:multinode-978269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:54:00.240463   48598 status.go:255] checking status of multinode-978269-m02 ...
	I0815 00:54:00.240878   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.240923   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.255700   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0815 00:54:00.256156   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.256684   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.256705   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.257083   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.257284   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetState
	I0815 00:54:00.258760   48598 status.go:330] multinode-978269-m02 host status = "Running" (err=<nil>)
	I0815 00:54:00.258773   48598 host.go:66] Checking if "multinode-978269-m02" exists ...
	I0815 00:54:00.259030   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.259059   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.273419   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0815 00:54:00.273733   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.274178   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.274195   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.274466   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.274620   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetIP
	I0815 00:54:00.276997   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | domain multinode-978269-m02 has defined MAC address 52:54:00:41:5f:d7 in network mk-multinode-978269
	I0815 00:54:00.277322   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:5f:d7", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:52:18 +0000 UTC Type:0 Mac:52:54:00:41:5f:d7 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-978269-m02 Clientid:01:52:54:00:41:5f:d7}
	I0815 00:54:00.277357   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | domain multinode-978269-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:41:5f:d7 in network mk-multinode-978269
	I0815 00:54:00.277472   48598 host.go:66] Checking if "multinode-978269-m02" exists ...
	I0815 00:54:00.277891   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.277931   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.292388   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0815 00:54:00.292905   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.293330   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.293348   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.293622   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.293813   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .DriverName
	I0815 00:54:00.293977   48598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 00:54:00.293992   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetSSHHostname
	I0815 00:54:00.296423   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | domain multinode-978269-m02 has defined MAC address 52:54:00:41:5f:d7 in network mk-multinode-978269
	I0815 00:54:00.296762   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:5f:d7", ip: ""} in network mk-multinode-978269: {Iface:virbr1 ExpiryTime:2024-08-15 01:52:18 +0000 UTC Type:0 Mac:52:54:00:41:5f:d7 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-978269-m02 Clientid:01:52:54:00:41:5f:d7}
	I0815 00:54:00.296797   48598 main.go:141] libmachine: (multinode-978269-m02) DBG | domain multinode-978269-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:41:5f:d7 in network mk-multinode-978269
	I0815 00:54:00.296946   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetSSHPort
	I0815 00:54:00.297095   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetSSHKeyPath
	I0815 00:54:00.297253   48598 main.go:141] libmachine: (multinode-978269-m02) Calling .GetSSHUsername
	I0815 00:54:00.297356   48598 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19443-13088/.minikube/machines/multinode-978269-m02/id_rsa Username:docker}
	I0815 00:54:00.378905   48598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 00:54:00.391791   48598 status.go:257] multinode-978269-m02 status: &{Name:multinode-978269-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 00:54:00.391831   48598 status.go:255] checking status of multinode-978269-m03 ...
	I0815 00:54:00.392196   48598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 00:54:00.392241   48598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 00:54:00.406950   48598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I0815 00:54:00.407361   48598 main.go:141] libmachine: () Calling .GetVersion
	I0815 00:54:00.407819   48598 main.go:141] libmachine: Using API Version  1
	I0815 00:54:00.407839   48598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 00:54:00.408090   48598 main.go:141] libmachine: () Calling .GetMachineName
	I0815 00:54:00.408249   48598 main.go:141] libmachine: (multinode-978269-m03) Calling .GetState
	I0815 00:54:00.409786   48598 status.go:330] multinode-978269-m03 host status = "Stopped" (err=<nil>)
	I0815 00:54:00.409807   48598 status.go:343] host is not running, skipping remaining checks
	I0815 00:54:00.409813   48598 status.go:257] multinode-978269-m03 status: &{Name:multinode-978269-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
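
As the two status runs above show, "minikube status" exits non-zero (7 here) once any node in the profile is stopped, so scripts can branch on the exit code instead of parsing the per-node text (a sketch; profile name illustrative):

    # Succeeds only while every node reports Running/Configured
    if out/minikube-linux-amd64 -p <profile> status >/dev/null; then
        echo "all nodes up"
    else
        echo "degraded or stopped (exit $?)"
    fi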

TestMultiNode/serial/StartAfterStop (38.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-978269 node start m03 -v=7 --alsologtostderr: (38.259457006s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.85s)

TestMultiNode/serial/DeleteNode (2.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-978269 node delete m03: (1.640064865s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.14s)

TestMultiNode/serial/RestartMultiNode (174.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-978269 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 01:03:45.641198   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:04:41.523342   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-978269 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m54.366006266s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-978269 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (174.89s)

TestMultiNode/serial/ValidateNameConflict (43.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-978269
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-978269-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-978269-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (56.786328ms)

                                                
                                                
-- stdout --
	* [multinode-978269-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-978269-m02' is duplicated with machine name 'multinode-978269-m02' in profile 'multinode-978269'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-978269-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-978269-m03 --driver=kvm2  --container-runtime=crio: (42.287495181s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-978269
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-978269: exit status 80 (202.037009ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-978269 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-978269-m03 already exists in multinode-978269-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-978269-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.59s)

TestScheduledStopUnix (110.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-532336 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-532336 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.616394266s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-532336 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-532336 -n scheduled-stop-532336
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-532336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-532336 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-532336 -n scheduled-stop-532336
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-532336
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-532336 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-532336
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-532336: exit status 7 (64.003922ms)

                                                
                                                
-- stdout --
	scheduled-stop-532336
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-532336 -n scheduled-stop-532336
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-532336 -n scheduled-stop-532336: exit status 7 (63.564742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-532336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-532336
--- PASS: TestScheduledStopUnix (110.10s)
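
The sequence above exercises the scheduled-stop flow: arm a delayed stop, read the countdown back through the status TimeToStop field, and cancel it before it fires. The same flow by hand (a sketch; profile name illustrative):

    # Arm a stop five minutes out, inspect the countdown, then cancel it
    out/minikube-linux-amd64 stop -p <profile> --schedule 5m
    out/minikube-linux-amd64 status -p <profile> --format='{{.TimeToStop}}'
    out/minikube-linux-amd64 stop -p <profile> --cancel-scheduled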

TestRunningBinaryUpgrade (214.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1254734539 start -p running-upgrade-339919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1254734539 start -p running-upgrade-339919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m58.462091693s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-339919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0815 01:14:41.523568   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-339919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.599666837s)
helpers_test.go:175: Cleaning up "running-upgrade-339919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-339919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-339919: (1.142864158s)
--- PASS: TestRunningBinaryUpgrade (214.64s)
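
The upgrade path here is just two start invocations against the same profile: an older released binary creates the cluster, then the freshly built binary re-runs start in place while the cluster is still running. A hedged sketch; the old-binary path and profile name are placeholders:

    # create the cluster with an older release (note the legacy --vm-driver flag)
    $ /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # upgrade in place with the current binary
    $ out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 delete -p upgrade-demo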

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (71.339687ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-312183] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
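
The exit status 14 above is the expected usage error: --no-kubernetes and --kubernetes-version are mutually exclusive. A short sketch of the failing call and the remedy suggested in the stderr output; the profile name is a placeholder:

    # rejected with MK_USAGE (exit status 14)
    $ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear any globally configured version, then start without Kubernetes
    $ minikube config unset kubernetes-version
    $ minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio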

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (92.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-312183 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-312183 --driver=kvm2  --container-runtime=crio: (1m32.290898467s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-312183 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.54s)

                                                
                                    
TestNetworkPlugins/group/false (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-641488 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-641488 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (117.698498ms)

                                                
                                                
-- stdout --
	* [false-641488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19443
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 01:13:26.066015   56751 out.go:291] Setting OutFile to fd 1 ...
	I0815 01:13:26.066177   56751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:13:26.066190   56751 out.go:304] Setting ErrFile to fd 2...
	I0815 01:13:26.066196   56751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0815 01:13:26.066486   56751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19443-13088/.minikube/bin
	I0815 01:13:26.067283   56751 out.go:298] Setting JSON to false
	I0815 01:13:26.068605   56751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6951,"bootTime":1723677455,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 01:13:26.068697   56751 start.go:139] virtualization: kvm guest
	I0815 01:13:26.077370   56751 out.go:177] * [false-641488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 01:13:26.078814   56751 out.go:177]   - MINIKUBE_LOCATION=19443
	I0815 01:13:26.078832   56751 notify.go:220] Checking for updates...
	I0815 01:13:26.081251   56751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 01:13:26.082597   56751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19443-13088/kubeconfig
	I0815 01:13:26.083926   56751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19443-13088/.minikube
	I0815 01:13:26.085027   56751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 01:13:26.086243   56751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 01:13:26.088004   56751 config.go:182] Loaded profile config "NoKubernetes-312183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:13:26.088161   56751 config.go:182] Loaded profile config "offline-crio-278022": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 01:13:26.088269   56751 config.go:182] Loaded profile config "running-upgrade-339919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0815 01:13:26.088385   56751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 01:13:26.131892   56751 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 01:13:26.133181   56751 start.go:297] selected driver: kvm2
	I0815 01:13:26.133205   56751 start.go:901] validating driver "kvm2" against <nil>
	I0815 01:13:26.133222   56751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 01:13:26.135526   56751 out.go:177] 
	W0815 01:13:26.136804   56751 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0815 01:13:26.137997   56751 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-641488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-641488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-641488"

                                                
                                                
----------------------- debugLogs end: false-641488 [took: 2.917238407s] --------------------------------
helpers_test.go:175: Cleaning up "false-641488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-641488
--- PASS: TestNetworkPlugins/group/false (3.18s)
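
The pass here only verifies the guard rail: with the crio runtime, --cni=false is rejected before any VM is created because CRI-O needs a CNI plugin for pod networking. A hedged sketch; bridge is one of minikube's built-in CNI choices and is not taken from this log:

    # fails fast with 'The "crio" container runtime requires CNI' (exit status 14)
    $ minikube start -p cni-demo --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # an explicit CNI (or the default auto-selection) is accepted
    $ minikube start -p cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio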

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 01:14:24.593880   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.979503448s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-312183 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-312183 status -o json: exit status 2 (243.887714ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-312183","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-312183
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-312183: (1.008261942s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.23s)
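
Re-running start with --no-kubernetes against an existing profile keeps the VM up but shuts Kubernetes down, which is why status -o json reports Host Running with Kubelet and APIServer Stopped and exits non-zero (status 2 in the run above). Sketch of the check; the profile name is a placeholder:

    $ minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio
    # non-zero exit is expected because the Kubernetes components are intentionally stopped
    $ minikube -p nok8s-demo status -o json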

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1884414241 start -p stopped-upgrade-284326 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1884414241 start -p stopped-upgrade-284326 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m4.132611764s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1884414241 -p stopped-upgrade-284326 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1884414241 -p stopped-upgrade-284326 stop: (1.335059645s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-284326 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-284326 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.893151007s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.36s)
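
Unlike the running-upgrade case above, this variant stops the cluster with the old binary before the new binary takes over. A sketch of the same three steps; the old-binary path and profile name are placeholders:

    $ /path/to/minikube-v1.26.0 start -p stopped-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ /path/to/minikube-v1.26.0 -p stopped-demo stop
    # the new binary restarts and upgrades the stopped cluster
    $ out/minikube-linux-amd64 start -p stopped-demo --memory=2200 --driver=kvm2 --container-runtime=crio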

                                                
                                    
TestNoKubernetes/serial/Start (43.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-312183 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.684156543s)
--- PASS: TestNoKubernetes/serial/Start (43.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-312183 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-312183 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.242833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
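
The check relies on systemctl's exit code rather than its output: is-active --quiet returns non-zero when the kubelet unit is not running, which is exactly what a --no-kubernetes profile should report. A minimal sketch with a placeholder profile name:

    # non-zero exit (surfaced as exit status 1 by minikube ssh) means kubelet is not active
    $ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running"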

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.430499064s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.517630776s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-312183
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-312183: (1.305408232s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-312183 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-312183 --driver=kvm2  --container-runtime=crio: (21.760226421s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-284326
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-312183 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-312183 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.83901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestPause/serial/Start (120.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-064537 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-064537 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m0.938141938s)
--- PASS: TestPause/serial/Start (120.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (101.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-884893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-884893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m41.629238848s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190398 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190398 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m25.276894816s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-884893 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f85e2fe7-34bc-4224-a841-881cc2362de4] Pending
helpers_test.go:344: "busybox" [f85e2fe7-34bc-4224-a841-881cc2362de4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f85e2fe7-34bc-4224-a841-881cc2362de4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004629208s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-884893 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
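
The deploy step is plain kubectl against the profile's kube context: create the busybox pod from testdata, wait for it to become Ready, then exec into it. A hedged equivalent of the readiness wait; the test itself polls pods matching the integration-test=busybox label, while kubectl wait is used here for brevity:

    $ kubectl --context no-preload-884893 create -f testdata/busybox.yaml
    $ kubectl --context no-preload-884893 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    $ kubectl --context no-preload-884893 exec busybox -- /bin/sh -c "ulimit -n"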

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-884893 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-884893 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
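
The addon is enabled with its image and registry overridden (deliberately pointed at a fake registry here), and the describe call only confirms that the override reached the Deployment spec. A sketch of the same two steps, with a grep added for convenience; the grep is not part of the test:

    $ minikube addons enable metrics-server -p no-preload-884893 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # the overridden image should now appear in the deployment spec
    $ kubectl --context no-preload-884893 describe deploy/metrics-server -n kube-system | grep Image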

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190398 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [50557ddb-b4d8-4a21-8243-8558f955147a] Pending
helpers_test.go:344: "busybox" [50557ddb-b4d8-4a21-8243-8558f955147a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [50557ddb-b4d8-4a21-8243-8558f955147a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00467133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018537 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018537 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m25.285208782s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.29s)
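
The only difference from the sibling StartStop profiles is --apiserver-port=8444, which moves the API server off the default 8443. A hedged way to confirm where the generated kubeconfig points; the kubectl query is not part of the test and the profile name is a placeholder:

    $ minikube start -p diff-port-demo --apiserver-port=8444 --driver=kvm2 --container-runtime=crio
    # the server URL recorded for this profile should end in :8444
    $ kubectl config view -o jsonpath='{.clusters[?(@.name=="diff-port-demo")].cluster.server}'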

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-190398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-190398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a262790f-9f48-41d8-ac94-90f4f9e60087] Pending
helpers_test.go:344: "busybox" [a262790f-9f48-41d8-ac94-90f4f9e60087] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a262790f-9f48-41d8-ac94-90f4f9e60087] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004288446s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-018537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-018537 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (680.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-884893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-884893 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m20.007336797s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884893 -n no-preload-884893
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (680.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-390782 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-390782 --alsologtostderr -v=3: (3.276479822s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-390782 -n old-k8s-version-390782: exit status 7 (66.418342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-390782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
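
Exit status 7 from minikube status corresponds to the Stopped state shown in stdout, which is why the test notes it "may be ok". Addons can still be enabled while the profile is down; the change is picked up on the next start. Sketch:

    # exit status 7 here means the host is Stopped, not that the command itself failed
    $ minikube status --format={{.Host}} -p old-k8s-version-390782; echo "exit=$?"
    $ minikube addons enable dashboard -p old-k8s-version-390782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4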

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (582.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190398 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:24:41.522718   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:25:08.710776   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190398 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m42.129119187s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190398 -n embed-certs-190398
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (582.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (479.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-018537 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:28:45.640951   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:29:41.522718   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:31:04.595246   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:33:45.640594   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-018537 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (7m59.451908947s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-018537 -n default-k8s-diff-port-018537
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (479.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-840156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 01:48:45.640856   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-840156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.868834542s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.87s)
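
This profile exercises a more manual network setup: an explicit CNI network plugin, a custom pod CIDR passed to kubeadm via --extra-config, a feature gate, and a reduced --wait set because pods cannot schedule until a CNI is actually installed. The relevant start flags, restated as a sketch with a placeholder profile name:

    $ minikube start -p newest-cni-demo \
        --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --feature-gates ServerSideApply=true \
        --wait=apiserver,system_pods,default_sa \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0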

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-840156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-840156 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-840156 --alsologtostderr -v=3: (9.512619261s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.51s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-840156 -n newest-cni-840156
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-840156 -n newest-cni-840156: exit status 7 (64.002884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-840156 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-840156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-840156 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (37.153305639s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-840156 -n newest-cni-840156
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (100.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0815 01:49:41.523176   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/functional-732793/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m40.978407629s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-840156 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-840156 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-840156 --alsologtostderr -v=1: (1.819995862s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-840156 -n newest-cni-840156
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-840156 -n newest-cni-840156: exit status 2 (323.726262ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-840156 -n newest-cni-840156
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-840156 -n newest-cni-840156: exit status 2 (283.242524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-840156 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-840156 -n newest-cni-840156
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-840156 -n newest-cni-840156
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.05s)
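Note: the pause/unpause sequence above can be replayed by hand with the same commands the harness uses; this is a sketch with <profile> standing in for newest-cni-840156. The non-zero exit codes from "status" while components are paused or stopped are expected ("may be ok" in the log).
	out/minikube-linux-amd64 pause -p <profile>
	out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile>   # expect "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p <profile>     # expect "Stopped"
	out/minikube-linux-amd64 unpause -p <profile>
	out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile>   # should report a running apiserver again once unpaused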

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.570345611s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (81.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0815 01:51:07.486181   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.492622   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.504001   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.525460   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.566887   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.648809   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:07.810293   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:08.131964   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:51:08.773453   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m21.358661082s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-641488 "pgrep -a kubelet"
E0815 01:51:10.054909   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5874g" [0a3977f1-465e-46c1-aabc-76d12e37f092] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5874g" [0a3977f1-465e-46c1-aabc-76d12e37f092] Running
E0815 01:51:17.738725   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.012941129s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xjzwm" [3a365b86-b92a-4ff5-8129-57fe80c8c6fe] Running
E0815 01:51:12.616824   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005912385s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
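The ControllerPod checks simply wait for the CNI's daemon pod to report Ready. Outside the harness this is roughly equivalent to a kubectl wait against the same label and namespace shown above (a sketch, not part of the test run):
	kubectl --context kindnet-641488 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m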

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gvrbx" [593e1b79-b1de-4dae-aa1b-48b2b9bd4f2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gvrbx" [593e1b79-b1de-4dae-aa1b-48b2b9bd4f2e] Running
E0815 01:51:27.980453   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004807855s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
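Taken together, the DNS, Localhost, and HairPin checks verify cluster DNS resolution from a pod, loopback connectivity inside the pod, and a pod reaching its own Service ("hairpin" traffic). They can be rerun manually with the same kubectl invocations the harness uses:
	kubectl --context auto-641488 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"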

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (74.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.100407265s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0815 01:51:48.462699   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m18.66360821s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rw6zm" [e5f02a5c-9f41-4b02-aa66-17d5e34e3ab7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005199548s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jt6j7" [c42a9218-e336-4fea-95e4-328cd6c14486] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jt6j7" [c42a9218-e336-4fea-95e4-328cd6c14486] Running
E0815 01:52:05.966878   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.146352891s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (104.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0815 01:52:00.836891   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:00.843277   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:00.854679   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:00.876104   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:00.917568   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:00.998995   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:01.160547   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:01.482208   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:02.123601   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:03.405308   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m44.139374418s)
--- PASS: TestNetworkPlugins/group/flannel/Start (104.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0815 01:52:11.088649   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (119.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0815 01:52:29.424614   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
E0815 01:52:41.812289   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/old-k8s-version-390782/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-641488 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m59.49114277s)
--- PASS: TestNetworkPlugins/group/bridge/Start (119.49s)
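Across the Start tests above, the CNI under test is selected entirely through minikube start flags. A hedged summary of the patterns exercised in this run (<profile> is a placeholder):
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio                                  # "auto": minikube picks a CNI
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio --cni=kindnet                    # built-ins used here: kindnet, calico, flannel, bridge
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio --cni=testdata/kube-flannel.yaml # or a custom CNI manifest
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio --enable-default-cni=true        # flag exercised by the enable-default-cni group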

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-59q67" [835a993a-b9fd-4696-a5c0-58e42a3ff44f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-59q67" [835a993a-b9fd-4696-a5c0-58e42a3ff44f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.0772091s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dlt4c" [0077fcff-552a-4741-ba3d-6e39fa300084] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dlt4c" [0077fcff-552a-4741-ba3d-6e39fa300084] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004557943s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-52rks" [ea638b2c-2c00-4fa4-a67d-24fa4d0c3cc2] Running
E0815 01:53:45.640594   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/addons-799058/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003654656s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r4svj" [cebb7d55-e14a-4984-87a0-a34abadac232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 01:53:51.346030   20279 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19443-13088/.minikube/profiles/no-preload-884893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-r4svj" [cebb7d55-e14a-4984-87a0-a34abadac232] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004112798s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-641488 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-641488 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xvmt2" [5d9403f3-9fb3-4cd7-bee7-6c784c1a01a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xvmt2" [5d9403f3-9fb3-4cd7-bee7-6c784c1a01a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004682116s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-641488 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-641488 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.14
262 TestNetworkPlugins/group/kubenet 3.19
270 TestNetworkPlugins/group/cilium 5.02
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-294760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-294760
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-641488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-641488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-641488"

                                                
                                                
----------------------- debugLogs end: kubenet-641488 [took: 3.013379176s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-641488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-641488
--- SKIP: TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-641488 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-641488" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-641488

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-641488" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-641488"

                                                
                                                
----------------------- debugLogs end: cilium-641488 [took: 4.874929441s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-641488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-641488
--- SKIP: TestNetworkPlugins/group/cilium (5.02s)

                                                
                                    